Videos and presentation materials from other INI events are also available.
Event  When  Speaker  Title  Presentation Material 

SASW05 
9th January 2012 11:00 to 11:30 
Martin Hyland  Spitalfields Day Welcome: The Mathematical Legacy of Alan Turing  
SASW05 
9th January 2012 11:30 to 12:30 
H Woodin  The global structure of the Turing degrees and the bi-interpretability conjecture  
SASW05 
9th January 2012 14:00 to 15:00 
Modern cryptography for non-cryptographers  
SASW05 
9th January 2012 15:15 to 16:15 
Measures of Relative Complexity  
SASW05 
9th January 2012 16:30 to 17:30 
A Dawar  On syntactic and semantic complexity classes  
SAS 
11th January 2012 17:00 to 17:30 
Diophantine Machines  
SAS 
11th January 2012 17:30 to 18:00 
I Mackie  Conservation of Data  
SAS 
11th January 2012 18:00 to 18:30 
Extended Goodstein sequences  
SAS 
12th January 2012 16:00 to 16:30 
Promptness, Randomness and Degrees  
SAS 
12th January 2012 16:30 to 17:00 
From Narratology to Formal Narrative Structures: What is Relevant in a Story?  
SAS 
12th January 2012 17:00 to 17:30 
Homophily Law of Networks: Principles, Methods, and Experiments  
SAS 
17th January 2012 16:00 to 16:30 
A Proof Pattern in Algebra  
SAS 
17th January 2012 16:30 to 17:00 
The Logic of Dependence and Independence  
SAS 
17th January 2012 17:00 to 17:30 
Slow Consistency  
SAS 
19th January 2012 16:00 to 16:30 
A Atserias  Sherali-Adams Relaxations and Graph Indistinguishability  
SAS 
19th January 2012 16:30 to 17:00 
Decision Methods over Real and Algebraically Closed Fields  
SAS 
24th January 2012 16:00 to 16:30 
Computability Questions about Fields
Many of the concepts standard in computable model theory today trace back to the 1956 paper 'Effective Procedures in Field Theory', by Fröhlich and Shepherdson. There the authors used Turing's definition of a partial computable function to show that many procedures simply could not be carried out in an arbitrary field $F$, even assuming one knows the elements of $F$ and can compute the field operations on those elements. For example, they formalized van der Waerden's intuitive notion that it is not in general decidable whether a polynomial from $F[X]$ has a root in $F$, or whether it is reducible within the polynomial ring $F[X]$. Indeed, they showed that these two questions about polynomials (having a root, and being reducible) are equicomputable: one can be done by a Turing machine if and only if the other can. Their proof generalizes easily to show that the set of reducible polynomials and the set of polynomials with roots are Turing-equivalent, and Rabin capitalized on these notions in his 1960 paper investigating effective algebraic closures of fields.
In this talk we will review these notions and then present recent refinements. The result of Turing-equivalence between the two sets described above was surprising: it is readily seen that decidability of irreducibility allows one to decide whether a polynomial has a root, but the reduction in the opposite direction is by no means apparent. Fröhlich and Shepherdson gave one such reduction, while another is implicit in Rabin's paper. A third reduction, which holds provided that the computable field has a computable transcendence basis over its prime subfield, was presented by the speaker in a 2010 paper. This new one has the stronger property of being a $1$-reduction: the irreducibility of a polynomial $p\in F[X]$ is reduced to the single question of whether another polynomial $q\in F[X]$, computed effectively from $p$, has a root in $F$. In the same paper, the author constructed a computable field, algebraic over the rationals (and thus trivially having a computable transcendence basis), for which there is no $1$-reduction in the opposite direction. Therefore, the two questions are indeed of distinct degrees of difficulty, although distinguishing them requires the notion of a $1$-reduction, which is finer than Turing reducibility but nevertheless absolutely standard in computability theory. This theorem was recently strengthened by the speaker's Ph.D.\ student Rebecca Steiner, who produced a computable algebraic field in which there is not even any… (see PDF for full abstract). 

SAS 
24th January 2012 16:30 to 17:00 
Constructive Thoughts on Operator Algebras
Operator algebra theory, in its classical form, is developed in about as nonconstructive a manner as one could imagine: typical existence proofs use contradiction arguments and applications of Zorn's lemma. Finding a viable constructive development of, or perhaps alternative to, operator algebra theory would seem to be a major test of Bishop-style constructive mathematics (or, indeed, of any other approach to extracting the computational content of a classical theory). In my talk I shall present some of the background, some recent progress, and some of the major problems that lie at the very start of such a development.


SAS 
24th January 2012 17:00 to 17:30 
Automated and Human Proofs in General Mathematics: An Initial Comparison  
SAS 
26th January 2012 16:00 to 16:30 
The "Million Message Attack" in 15 Thousand Messages  
SAS 
26th January 2012 16:30 to 17:00 
Reuniting the antipodes: bringing together Nonstandard Analysis and Constructive Analysis  
SASW07 
31st January 2012 09:40 to 09:45 
Welcome: Is Cryptographic Theory Practically Relevant?  
SASW07 
31st January 2012 09:45 to 10:30 
Privacy in Deniable Anonymous Concurrent Authentication with Setup is Impossible: Do we Care?
In this talk we review four stories about theoretical cryptography and their impact in practice. This includes a recent analysis of RC4, the notion of deniable interactive proof, the impossibility of strong privacy in RFID protocols, and the impossibility to UC-realize commitment.


SASW07 
31st January 2012 11:00 to 11:45 
Storage encryption and key management
Data encryption has become a key requirement for enterprise storage systems. As a consequence of this I have looked into storage encryption methods and contributed to several storage security products at IBM. Research has formulated the notion of tweakable encryption modes, which specifically address a requirement of storage encryption. On the other hand, practitioners had used specific key-wrapping modes for a long time before researchers came up with a formal notion. We highlight where and how they are used.
The biggest concern in storage encryption is the cryptographic keys, which must be maintained securely and reliably. Users struggle with the key-management problem because operating procedures and formats differ across systems. When multiple users access a key server, its interface must be designed with special consideration for cryptographic relations among keys. Cryptographic hardware security modules (HSMs) face the same problem. Some logical attacks through the key-management operations of HSMs have been reported in the past, which allowed keys to be exposed merely by exploiting their interfaces in unexpected ways. We show how to model the security of key-management systems formally and protect them from interface attacks. This work originates in the context of creating the OASIS Key Management Interoperability Protocol (KMIP), a new open standard for enterprise-level key management.


SASW07 
31st January 2012 11:45 to 12:30 
Mathematical and practical security problems not directly related to cryptography
Bored by random oracles, lattices, security reductions and games? This talk is for you! We will describe a number of practical security problems (some of which are very mathematical) met while designing security systems. Subject to available time, we will address the efficient spatial scattering of secrets, the geometrical properties of protective chip wire meshes, a new breed of system fault protections requiring combinatorial tools, the optimal placement of scramblers, quick pre-identification, and the use of secure devices to... attack other secure devices.


SASW07 
31st January 2012 14:00 to 14:45 
Efficient Verification of ElGamal Ciphertext Shuffles
A shuffle is a permutation and re-randomization of a set of ciphertexts. This means that the input ciphertexts and the output ciphertexts contain the same set of plaintexts but in permuted order. Furthermore, due to re-randomization of the ciphertexts the permutation is hidden. Mixnets often use a sequence of random shuffles performed by different mix-servers to hide the link between senders and plaintexts. A common use is found in voting schemes, where a mixnet uses random shuffles to anonymize a set of encrypted votes.
To protect against malicious mix-servers it is necessary to verify that the shuffles are correct. Otherwise, a bad mix-server could for instance substitute encrypted votes cast by honest voters with encrypted votes for another candidate. Zero-knowledge proofs can be used to guarantee the correctness of a shuffle without revealing the underlying permutation or anything else. By providing such a zero-knowledge proof the mix-server can prove that it has not substituted any ciphertexts or in any other way deviated from the protocol; but at the same time the link between input ciphertexts and output ciphertexts remains secret.
Zero-knowledge proofs for correctness of a shuffle are complicated beasts, but we will present a construction that is both efficient and where the required communication is much smaller than the size of the shuffle itself. We have implemented the zero-knowledge proof and will provide concrete performance measurements for verifying shuffles of ElGamal ciphertexts.
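The shuffle operation itself (permute, then re-randomize each ciphertext) is simple; only proving its correctness in zero knowledge is hard. A minimal sketch over a toy group, exploiting the homomorphic property of ElGamal that multiplying a ciphertext by an encryption of 1 preserves the plaintext (the group parameters below are illustrative and far too small to be secure):

```python
import random

# Toy parameters (NOT secure): p = 2q + 1 with q prime; g generates the
# order-q subgroup of Z_p^*.
q, p = 11, 23
g = 4  # = 2^2 mod 23, has order 11

def keygen():
    x = random.randrange(1, q)          # secret key
    return x, pow(g, x, p)              # (sk, pk = g^x)

def encrypt(h, m):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def rerandomize(h, ct):
    """Multiply by a fresh encryption of 1: same plaintext, new randomness."""
    a, b = ct
    s = random.randrange(1, q)
    return (a * pow(g, s, p) % p, b * pow(h, s, p) % p)

def shuffle(h, cts):
    """Permute and re-randomize: same multiset of plaintexts, hidden order."""
    out = [rerandomize(h, ct) for ct in cts]
    random.shuffle(out)
    return out

def decrypt(x, ct):
    a, b = ct
    return b * pow(a, p - 1 - x, p) % p  # b * a^{-x} by Fermat's little theorem
```

Decrypting the shuffled list recovers the original plaintexts as a set, while the fresh randomness makes the output ciphertexts unlinkable to the inputs.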


SASW07 
31st January 2012 14:45 to 15:30 
T Shrimpton 
A long answer to the simple question, "Is TLS provably secure?"
TLS is perhaps the Internet's most widely used security protocol, and at its heart is a subprotocol for providing data privacy and integrity, called the TLS Record Protocol. Is the TLS Record Protocol provably secure? A series of papers starting in 2000 delivered the answers (roughly): no, not for all possible underlying encryption schemes; yes, for some of the specific encryption schemes that TLS uses, but only under some impractical assumptions; yes, under less restrictive assumptions, but for a definition of "secure" that is hard to understand; yes, as long as your integrity-providing "tag" isn't too short. We'll explore this line of papers, as well as some interesting attacks that helped to guide the provable-security results. In the end, we'll argue that the answer is still "it depends on how you use it" by discussing new results on using secure authenticated encryption (e.g. TLS) as a tunnel between a user and a proxy, through which webpages are requested and downloaded. We'll see that it is surprisingly easy to determine which webpage was visited, even in the presence of some sophisticated efforts to fragment and pad the webpage data prior to entering the provably-secure encryption tunnel.


SASW07 
31st January 2012 16:00 to 16:30 
R Horne & G French 
Scaling Cryptographic Deployments
As use of cryptography has exploded in recent years, so has the vulnerability to exploitation of the multitude of implementations. Organisations such as banks who critically depend on cryptography on a massive scale (both in terms of volume of keys under management and number of different implementations) need: 1. Solutions that enable standardised, central service provision of cryptography, and 2. Implementations of cryptographic standards and protocols that are demonstrably secure. This talk will discuss the requirements and illustrate how research is required to meet them. It will illustrate the real-world applicability of current research to identify weaknesses in implementations of cryptographic protocols.


SASW07 
31st January 2012 16:30 to 17:15 
Analysis of Cryptographic Security APIs
In practice, many developers use cryptography via an application program interface (API) either to a software library or a hardware device where keys are stored and all cryptographic operations take place. Designing such interfaces so that they offer flexible functionality but cannot be abused to reveal keys or secrets has proved to be extremely difficult, with a number of published vulnerabilities in widely-used APIs appearing over the last decade. This talk will discuss recent research on the use of formal methods to specify and verify such interfaces in order to either detect flaws or prove security properties. We will focus on the example of RSA PKCS#11, the most widely used interface for cryptographic devices. We will demonstrate a tool, Tookan, which can reverse engineer the particular configuration of PKCS#11 in use on some device under test, construct a model of the device's functionality, and call a model checker to search for attacks. If an attack is found, it can be executed automatically on the device. We will comment on design principles for the next generation of APIs.


SASW07 
1st February 2012 09:00 to 09:45 
Cryptographic Extraction
Randomness extractors are algorithms that map sources of sufficient min-entropy to outputs that are statistically close to uniform. Randomness extraction has become a central and ubiquitous notion in complexity theory and theoretical computer science with innumerable applications and surprising and unifying connections to other notions. Cryptography, too, has greatly benefited from this notion. Cryptographic applications of randomness extractors range from the construction of pseudorandom generators from one-way functions to the design of cryptographic functionality from noisy and weak sources (including applications to quantum cryptography) to the more recent advances in areas such as leakage- and exposure-resilient cryptography, circular encryption, fully homomorphic encryption, etc.
Randomness extractors have also found important cryptographic uses in practical applications, particularly for the construction of key derivation functions. In many of these applications, the defining property of randomness extractors, namely, statistical closeness of their output to a uniform distribution, can be relaxed and replaced with computational indistinguishability. Extractors that provide this form of relaxed guarantee are called 'computational extractors'.
In this talk I will cover some recent advances in the understanding and applicability of computational extractors with particular focus on their role in building key derivation functions.
As a connection between this talk and the question 'Is Cryptographic Theory Practically Relevant?' see http://tools.ietf.org/html/rfc5869.
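RFC 5869, linked above, specifies HKDF, a key derivation function in the extract-then-expand paradigm this talk concerns: an extract step concentrates the min-entropy of the input keying material into a short pseudorandom key, and an expand step stretches it into as much output key material as needed. A minimal sketch of the two steps (a self-contained illustration, not a substitute for a vetted library implementation):

```python
import hmac
import hashlib

HASH = hashlib.sha256
HASHLEN = HASH().digest_size  # 32 bytes

def hkdf_extract(salt, ikm):
    """Extract: PRK = HMAC-Hash(salt, IKM); empty salt defaults to zeros."""
    return hmac.new(salt or b"\x00" * HASHLEN, ikm, HASH).digest()

def hkdf_expand(prk, info, length):
    """Expand: T(i) = HMAC(PRK, T(i-1) | info | i), concatenated to `length`."""
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), HASH).digest()
        okm += t
        i += 1
    return okm[:length]
```

The `info` argument binds the derived key to its context, so one extracted key can safely yield independent keys for different purposes.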


SASW07 
1st February 2012 09:45 to 10:30 
The practical application of cryptography to international card payments
The presentation will explore chip card payment systems based on the EMV specifications and their use of and reliance on cryptography. The presentation will begin by reviewing some earlier forms of card payments and then move on to a more detailed analysis of the cryptographic methods used in today's systems and those that may be deployed in the future.


SASW07 
1st February 2012 11:00 to 11:45 
Cryptography with Work-based Corruptions and the Combinatorics of Anonymity
In the setting of cryptographic protocols, the corruption of a party has been viewed as a simple, uniform and atomic operation, where the adversary decides to get control over a party and this party immediately gets corrupted. In this talk, motivated by the fact that different players may require different resources to get corrupted, we introduce the notion of resource-based corruptions, where the adversary must invest some resources in order to perform corruptions. If the adversary has full information about the system configuration then resource-based corruptions would provide no fundamental difference from the standard corruption model. However, in the `anonymous' setting (where anonymity is in the sense that such configuration is hidden from the adversary), much is to be gained in terms of efficiency and security. We showcase the power of anonymity in the setting of secure multiparty computation with resource-based corruptions and prove that anonymity can effectively be used to circumvent impossibility results. Regarding efficiency gains, we show that anonymity can be used to force the corruption threshold to drop from 1/2 to 1/3, in turn allowing the use of more efficient cryptographic protocols in various settings. Joint work with Juan Garay, David Johnson (AT&T), Moti Yung (Google).


SASW07 
1st February 2012 11:45 to 12:30 
HSM Portal – Practical Tools built on Theory
Cryptomathic has been working on a new approach to applications cryptography, inspired by the need to give useful and sound building blocks to applications developers, rather than flexible but dangerous tools. This is partly based on some ideas and work by George French, Senior Security Architect, Barclays. The talk describes novel features of our 'HSM portal', a cryptographic service which offers high-level primitives to applications and abstracts away the dirty details of hardware security modules, software cryptography and key management. Using a high-level approach allows the service to offer primitives closer to the theoretical models of black-box cryptography than are ordinarily provided by a simple crypto library. Our talk will discuss why we believe the gap between applications developers requiring cryptography and security engineers/theorists is ever widening. We believe new tools are needed and proposed theoretical ideas need to be brought to life to bridge this gap, lest practice become totally disconnected from theory in the future.


SASW07 
1st February 2012 14:00 to 14:45 
Theory and practice for hash functions
In the last seven years we have witnessed a surge in the cryptanalysis of hash functions. The most visible result was the cryptanalysis of MD5 and SHA-1 by Wang et al., but there have been other results, including multi-collision attacks, long-message second-preimage attacks, and rebound attacks. There has also been substantial progress in understanding security definitions and models (e.g. indifferentiability), and a large number of security reductions have been proven. In 2007, NIST launched the call for the SHA-3 competition. In 2008 more than 60 submissions were received, making this the largest open cryptographic competition to date. In this talk we will discuss the impact of the research on hash functions on practice and the interaction between theory and practice in the SHA-3 competition.


SASW07 
1st February 2012 14:45 to 15:30 
C Paar 
Lessons Learned from Four Years of Implementation Attacks against RealWorld Targets
Over the last few years we were able to break various real-world security systems using various flavours of physical attacks. About three years ago we were able to break KeeLoq, which is a 64-bit block cipher that is popular for remote keyless entry (RKE) systems. Even though the attack seems almost straightforward in hindsight, there were many practical and theoretical problems to overcome. More recently we were able to break certain types of the DESFire contactless smart card, which are widely used, e.g., for payment applications. We also completely broke the bitstream encryption used in Xilinx FPGAs. In both cases we were able to recover the keys for either 3DES or AES using power-analysis attacks. In contrast to KeeLoq, both 3DES and AES are considered very secure from a classical cryptanalytical point of view. Interestingly, the real-world implications of these key-extraction attacks are highly dependent on the system design (and not on the cipher used). In addition to summarizing the above-mentioned work, I will try to draw some meaningful conclusions. These include the often considerable practical hurdles an attacker has to overcome and the important role that system design plays.


SASW07 
1st February 2012 16:00 to 16:45 
G Danezis 
Secure metrology: From theory to practice
Over the past year our team has developed a portfolio of technologies to address privacy issues in smart metering. These were inspired by cryptographic techniques well established in research, but unheard of in the domain of embedded-systems security. In this talk I chart our journey from the design of the techniques to our experiences implementing them as part of a real smart meter. Considerations such as memory footprint, stateless interfaces, and compatibility with communications protocols, rather than mere security properties, ended up being the deciding factors.


SASW07 
2nd February 2012 09:00 to 09:45 
PracticeDriven Cryptographic Theory
Cryptographic standards abound: TLS, SSH, IPSec, XML Encryption, PKCS, and so many more. In theory the cryptographic schemes used within these standards solve well-understood problems, yet a parade of damaging attacks leaves us with the question: What gives? Theoreticians often suggest (at least in private) that the problems are well understood and attacks arise because standardizers misunderstand cryptographic theory. I'll use some of my recent work, which uses provable-security techniques to analyze important standards (including TLS, HMAC, and PKCS#5), to argue that, just as often, it is the theoreticians who don't have all the answers: analyzing practically useful cryptography requires pushing models and proof techniques in never-before-considered directions. We'll see how (what I'll call) practice-driven cryptographic theory can lead to new understanding and improved confidence in cryptographic practice.
This talk will cover joint work with Mihir Bellare, Yevgeniy Dodis, Kenneth Paterson, Thomas Shrimpton, Niels Ferguson, John Steinberger, and Stefano Tessaro.


SASW07 
2nd February 2012 09:45 to 10:30 
H Shacham 
Cars and Voting Machines: Embedded Systems in the Field
How well are the tools of modern cryptography employed in fielded embedded systems? How are the common tasks of communication and authentication, key storage and distribution, and firmware update and verification performed? In this talk, we describe evidence gathered from several studies of deployed embedded systems: a modern mass-market automobile and two electronic voting machines. These studies consisted of substantial reverse-engineering efforts by large teams of researchers. We find that in many cases the designers of the systems we studied are getting simple cryptographic tasks wrong. These failures suggest a lack of engagement with the cryptography and security research community. We consider some reasons for the status quo, and some ways that it might be improved. Joint work with Danny Anderson, Stephen Checkoway, Alexei Czeskis, Ariel Feldman, Edward Felten, J. Alex Halderman, Srinivas Inguva, Brian Kantor, Tadayoshi Kohno, Karl Koscher, Damon McCoy, Shwetak Patel, Eric Rescorla, Franziska Roesner, Stefan Savage, and Dan Wallach.


SASW07 
2nd February 2012 11:00 to 11:45 
From Cryptographer's Cryptography to Engineer's Crypto
Cryptography has become an increasingly important subject, not only for cryptographers but also for computer engineers. Having been involved for many years in research and industrial consultation in cryptography and its practical applications, I have become very aware of the huge gap between the cryptographer's and the engineer's point of view regarding cryptography. In this talk, I would like to share my experiences with the workshop participants, and will take the cryptographic functionalities of a TPM (trusted platform module) as an interesting example to discuss how to create a healthy life cycle in cryptography. This will include a smooth link between academic research, international standards, industrial products and real usage.


SASW07 
2nd February 2012 11:45 to 12:30 
D Wikström 
Verificatum  An efficient and provably secure mixnet
A common component in electronic election schemes is a so-called 'mixnet'. This is a protocol executed by a group of servers that takes a list of ciphertexts and outputs the plaintexts in sorted order without revealing anything about the correspondence between input ciphertexts and output plaintexts. Thus, a mixnet can securely perform the tabulation in an electronic election where voters submit encrypted votes. Verificatum is an open source implementation of an efficient and provably secure mixnet (see http://www.verificatum.org). In its simplest form it is a Java application that is easy to install and use, but it can also exploit optimized C code, it can be configured in many ways, and it is a perfect platform for implementing more advanced election schemes. It has already been used successfully in the Wombat voting project. We will present Verificatum and describe the work we have done so far and what lies ahead.


SASW07 
2nd February 2012 14:00 to 14:45 
Key Exchange: Security Models and Automatic Analysis
During the last 20 years many game-based security models have been proposed for the analysis of Key Exchange protocols. The intent of these models is to prove Key Exchange protocols secure in the presence of increasingly powerful adversaries. In this talk, we present the main ingredients of these models, and relate them to practical threat models.
We highlight both benefits and drawbacks of the way in which these security models are currently defined. Additionally, we describe to what extent we can currently provide automatic analysis for Key Exchange protocols. We show how we use automatic analysis for evaluating existing security models as well as for developing alternative security models.


SASW07 
2nd February 2012 14:45 to 15:30 
D McGrew 
Problems in Cryptographic Standards and Implementations
In theory, we understand how to provide security through cryptography, yet too often practice does not live up to this promise. In standards, cryptographic imperatives compete with other pragmatic needs. This work seeks to understand those noncryptographic needs and shed light on how they impact cryptographic security. We survey security failures in cryptographic standards and implementations, and analyze common problems. For standards, we consider the example of problems with authentication and the slow but steady adoption of authenticated encryption. For implementations, we review reported vulnerabilities and assess typical misuses and failure modes. Lastly, we suggest some ways that the research and standards communities can collaborate.


SAS 
7th February 2012 16:00 to 16:30 
Homotopy Type Theory and the Structure Identity Principle  
SAS 
7th February 2012 16:30 to 17:00 
A logician's approach to Network Coding  
SAS 
7th February 2012 17:00 to 17:30 
A Brooke-Taylor  The Bousfield lattice from a set-theoretic perspective  
SAS 
9th February 2012 16:00 to 16:30 
G Primiero  Practical Reasoning with Proofs and Types  
SAS 
9th February 2012 16:30 to 17:00 
Proof complexity and search problems  
SAS 
14th February 2012 16:00 to 16:30 
Lowness for Information  
SAS 
14th February 2012 16:30 to 17:00 
Computable Structures of High Scott Rank  
SAS 
14th February 2012 17:00 to 17:30 
Number theoretic consequences of Ramsey theoretic principles  
SAS 
16th February 2012 16:00 to 16:30 
Real closed fields and models of Peano Arithmetic  
SAS 
16th February 2012 16:30 to 17:00 
The AE-theory of the partial ordering of the $\Sigma^0_2$-enumeration degrees  
SAS 
16th February 2012 17:00 to 17:30 
Formal mathematics and natural language  
SAS 
17th February 2012 17:30 to 18:30 
J Prager  IBM Watson from Jeopardy! to Healthcare: Could a quiz-show-winning computer advise your doctor?  
SAS 
18th February 2012 16:25 to 17:25 
R Rojas  Zuse and Turing  
SAS 
18th February 2012 17:40 to 18:40 
L Moran  The law's fascination with homosexual acts: Insights from the Wolfenden review archive  
SAS 
19th February 2012 10:00 to 11:00 
What does Watson understand? How recent AI activities relate to Turing's vision  
SAS 
19th February 2012 11:15 to 12:15 
From Mind to Turing to Mind  
SAS 
19th February 2012 13:45 to 14:45 
Pattern Formation and Turing Pattern in Developing Cell Systems  
SAS 
19th February 2012 14:45 to 15:45 
Turing and Modern Cryptography  
SAS 
19th February 2012 16:00 to 17:00 
The contribution of Alan Turing to British intelligence  
SAS 
21st February 2012 16:00 to 16:30 
C Freer  Computability and probabilistic symmetries  
SAS 
21st February 2012 16:30 to 17:00 
Big open problems: a discussion session  
SAS 
23rd February 2012 16:00 to 16:30 
M Davis  Universality and Computation  
SAS 
23rd February 2012 16:30 to 17:00 
Proofs and Computations  
SAS 
23rd February 2012 17:00 to 17:30 
The structure of Weihrauch degrees  
SAS 
28th February 2012 16:00 to 16:30 
Predictably terminating computations  
SAS 
28th February 2012 16:30 to 17:00 
Towards transfinite dynamical systems  
SAS 
1st March 2012 13:30 to 15:00 
Meeting of the Interest Group on Formal Mathematics: Fine-grained mathematical dependencies in the Mizar proof assistant  
SAS 
1st March 2012 16:00 to 16:30 
Model-comparison games with algebraic rules  
SAS 
1st March 2012 16:30 to 17:00 
Things I have been thinking about recently, and things I would like to do  
SAS 
1st March 2012 17:00 to 17:30 
Beyond inductive definitions: induction-recursion, induction-induction, coalgebras
Our proof-theoretic programme, which is a form of a revised Hilbert's Programme, is to prove the consistency of proof-theoretically strong theories, via an ordinal analysis, relative to predicatively justified extensions of Martin-Löf Type Theory. This gives rise to new data types, which can be used outside type theory as well. We will report on three extensions found this way: induction-recursion, induction-induction and weakly final coalgebras in dependent type theory.
Induction-recursion was originally introduced by Peter Dybjer in order to obtain a type theory which includes all possible extensions of type theory considered acceptable at that time. We consider a closed formalisation by having one data type whose elements are representatives of arbitrary inductive-recursive definitions. This data type has been applied to generic programming. A (proof-theoretically weak) variant of induction-recursion is called induction-induction, and has been investigated in collaboration with my PhD student Fredrik Forsberg. Variants of this concept occur frequently in mathematics, when one defines a set simultaneously with an order on it, and both are defined mutually inductively. An example is Conway's surreal numbers. Finally we look at weakly final coalgebras in dependent type theory, where the roles of elimination and introduction rules are interchanged. 

SAS 
6th March 2012 16:00 to 16:30 
Fully Homomorphic Encryption  
SAS 
6th March 2012 16:30 to 17:00 
G de Miguel Casado  Bridging Formal Methods in Computer Science for Scientific Computing and Cognitive Science Modelling Tasks  
SAS 
6th March 2012 17:00 to 17:30 
Synthesis from Components  
SAS 
8th March 2012 13:30 to 15:00 
P Aczel  Literate Proving: Informal Discussion Group  
SAS 
8th March 2012 16:00 to 16:30 
K Ambos-Spies  On the strongly bounded Turing degrees of the computably enumerable sets  
SAS 
8th March 2012 16:30 to 17:00 
Hard tautologies  
SAS 
13th March 2012 16:00 to 16:30 
Two visions from Turing: Universality and the power of constraining resources  
SAS 
13th March 2012 16:30 to 17:00 
C BeckerAsano 
Computational modeling of emotions
With the advance of virtual reality and social robotics, computer scientists have become increasingly interested in modeling inherently human, interpersonal states and processes. The quest of constructing machines that behave appropriately in direct human interaction requires the integration of social competence, which in turn includes a robot's ability to deal with such soft concepts as ``emotions.''
After a short motivation I give an overview of a selection of the many theories that emotion psychology has to offer. In particular, one possible distinction of three classes of emotions is motivated, namely that of primary, secondary, and social emotions. These classes serve as the basis for the introduction of ``WASABI'', a computational model of emotions, in which only the first two classes can be represented so far. Finally, it is discussed how dynamic epistemic logic might be used as a basis for the additional integration of social emotions such as ``embarrassment'', ``shame'', or ``guilt''. The ideas concerning the connection between dynamic epistemic logic and social emotions are joint work with Bernhard Nebel, Benedikt Löwe, Andreas Witzel, and Yanjing Wang. 

SAS 
13th March 2012 17:00 to 17:30 
I Kalimullin  Limitwise monotonicity spectra and uniformity  
SASW08 
14th March 2012 13:50 to 14:40 
Far from Turing? Turing’s Paradigm in Physiology
Turing’s original model was a linear instability in a two-variable PDE system, describing imaginary “morphogens” reacting and diffusing in a 2D domain. Since the discovery of physiological morphogens over the past few decades, even this simple model has had successful applications. The growing maturity of the applications has now led modelers to more complex scenarios. Developments have included the extension of the original model to include cell-density variables, the inclusion of mechanical factors, the extension to 3D spatial domains, and the study of patterns, such as isolated spots, that occur far from the linear instability first studied by Turing. We will review examples of these new developments in the field of physiology and pathophysiology.
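The two-variable linear instability described here can be checked numerically from the reaction Jacobian and the diffusion coefficients. A minimal sketch (the Jacobian and parameter values below are illustrative, not taken from the talk):

```python
import numpy as np

def turing_unstable(J, Du, Dv, k2_range=None):
    """Check whether diffusion destabilises a stable homogeneous state.

    J: 2x2 Jacobian of the reaction kinetics at the steady state.
    Du, Dv: diffusion coefficients of the two species.
    Returns True if the state is stable without diffusion (tr < 0, det > 0)
    but some wavenumber k grows with diffusion, i.e. det(J - k^2 D) < 0.
    """
    J = np.asarray(J, dtype=float)
    tr, det = np.trace(J), np.linalg.det(J)
    if not (tr < 0 and det > 0):
        return False  # homogeneous state must be stable without diffusion
    if k2_range is None:
        k2_range = np.linspace(0.0, 50.0, 2001)
    D = np.diag([Du, Dv])
    for k2 in k2_range:
        # instability of mode k when the determinant turns negative
        if np.linalg.det(J - k2 * D) < 0:
            return True
    return False

# Illustrative activator-inhibitor Jacobian: tr = -3 < 0, det = 2 > 0,
# so the homogeneous state is stable in the absence of diffusion.
J = [[1.0, -2.0], [3.0, -4.0]]
print(turing_unstable(J, Du=1.0, Dv=1.0))   # equal diffusion: no pattern
print(turing_unstable(J, Du=1.0, Dv=30.0))  # fast inhibitor: Turing instability
```

The scan over `k2_range` stands in for the continuum of wavenumbers on an unbounded flat domain; a bounded domain would restrict it to a discrete set.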


SASW08 
14th March 2012 14:50 to 15:40 
Integrating experiment and theory to elucidate the chemical basis of hair and feather morphogenesis
In his seminal 1952 paper, ‘The Chemical Basis of Morphogenesis’, Alan Turing lays down a milestone in the application of theoretical approaches to understand complex biological processes. The molecular revolution that has taken place during the six decades following this landmark publication has now placed this generation of theoreticians and biologists in an excellent position to rigorously test the theory and, encouragingly, a number of systems have emerged that appear to conform to some of Turing’s fundamental ideas. In this talk I will describe how the integration between experiment and theory has been used to enhance understanding in a model system of embryonic patterning: the emergence of feathers and hair in the skins of birds and mammals.


SASW08 
14th March 2012 15:50 to 16:10 
M Grace & MT Hütt 
Spiral-wave prediction in a lattice of FitzHugh-Nagumo oscillators
In many biological systems, variability of the components can be expected to outrank statistical fluctuations in the shaping of self-organized patterns. The distribution of single-element properties should thus allow the prediction of features of such patterns.
In a series of previous studies on established computational models of Dictyostelium discoideum pattern formation we demonstrated that the initial properties of potentially very few cells have a driving influence on the resulting asymptotic collective state of the colony [1,2].
One plausible biological mechanism for the generation of variability in cell properties and of spiral wave patterns is the concept of a ‘developmental path’, where cells gradually move on a trajectory through parameter space.
Here we review the current state of knowledge of spiral-wave prediction in excitable systems and present a new one-dimensional developmental path based on the FitzHugh-Nagumo model, incorporating parameter drift and concomitant variability in the distribution of cells embarking on this path, which gives rise to stable spiral waves. Such a generic model of spiral-wave predictability allows new insights into the relationship between biological variability and features of the resulting spatiotemporal pattern.
[1] Geberth, D. and Hütt, M.-Th. (2008) Predicting spiral wave patterns from cell properties in a model of biological self-organization. Phys. Rev. E 78, 031917. [2] Geberth, D. and Hütt, M.-Th. (2009) Predicting the distribution of spiral waves from cell properties in a developmental-path model of Dictyostelium pattern formation. PLoS Comput. Biol. 5, e1000422.
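The FitzHugh-Nagumo kinetics underlying such lattice models fit in a few lines; a single-element forward-Euler sketch with textbook parameter values (not those used in the study):

```python
def fhn_step(v, w, I=0.0, a=0.7, b=0.8, eps=0.08, dt=0.01):
    """One forward-Euler step of the FitzHugh-Nagumo kinetics.

    v: fast excitation variable, w: slow recovery variable, I: applied current.
    Parameter values are textbook defaults, not those of the study.
    """
    dv = v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    return v + dt * dv, w + dt * dw

# Without forcing (I = 0) a supra-threshold perturbation triggers one
# excursion, after which the element relaxes back to the excitable rest
# state near (v, w) = (-1.20, -0.62):
v, w = 0.5, 0.0
for _ in range(200000):
    v, w = fhn_step(v, w)
print(round(v, 2), round(w, 2))
```

A lattice model couples many such elements diffusively in `v`; varying the parameters from element to element, as in the developmental-path picture above, is what shapes the resulting spiral-wave patterns.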


SASW08 
14th March 2012 16:10 to 16:30 
Transport and development in fungal networks
Multicellular organisms have evolved sophisticated systems to supply individual cells with the resources necessary for survival. Plants circulate nutrients through the xylem and phloem, mammals have cardiovascular systems, but how do fungi translocate materials? Cord-forming fungi form extensive networks that continuously adapt to their surroundings, but what is the developmental logic of such fungal networks, and how does fungal morphology enable efficient transport? In this talk I shall address these fundamental questions, and present the concept of growth-induced mass flows. The key idea is that aqueous fluids are incompressible, so as the fluid-filled vessels expand, there must be movement of fluid from the sites of water uptake to the sites of growth. We have developed a model of delivery in growing fungal networks, and found good empirical agreement between our model and experimental data gathered using radiolabeled tracers. Our results lead us to suggest that in foraging fungi, growth-induced mass flow is sufficient to account for long-distance transport, if the system is well insulated. We conclude that active transport mechanisms may only be required at the very end of the transport pathway, near the growing tips.


SASW08 
14th March 2012 16:30 to 16:50 
The Sensitivity of Turing's Pattern Formation Mechanism
The prospect of long-range signalling by diffusible morphogens initiating large-scale pattern formation has been contemplated since the initial work of Turing in the 1950s and has been explored theoretically and experimentally in numerous developmental settings. However, Turing’s pattern formation mechanism is notorious for its sensitivity to the details of the initial conditions, suggesting that, in isolation, it cannot robustly generate pattern within noisy biological environments. Aspects of developmental self-organisation, in particular a growing domain, have been suggested as a mechanism for robustly inducing a sequential cascade of self-organisation, thus circumventing the difficulties of sensitivity. However, the sensitivity question emerges once more for generalisations of Turing's model which include further biological aspects, for example the inclusion of gene expression dynamics: this will be explored in detail.


SASW08 
14th March 2012 17:10 to 18:00 
Turing's instability versus cross-diffusion-driven instability
Turing's diffusion-driven instability has been observed to produce diverse complex and regular spatial and/or temporal patterns, not only in scientific but also in engineering fields. Another instability underlying pattern formation, such as cross-diffusion-driven instability or chemotactic instability, is discussed in ecological and biological systems. In this talk, I am concerned with the relation between the two types of instability above. As an application, I discuss self-organized aggregation of biological individuals with aggregation pheromones.
Keywords: cross-diffusion, active aggregation, infinite-dimensional relaxation oscillation.


SASW08 
14th March 2012 18:10 to 19:00 
Patterns of Sources and Sinks in Oscillatory Systems
In oscillatory systems, a fundamental spatiotemporal pattern is the wavetrain, a spatially periodic solution moving with constant speed (also known as a periodic travelling wave). In many numerical simulations, one observes finite bands of wavetrains, separated by sharp interfaces known as "sources" and "sinks". This talk is concerned with patterns of sources and sinks in the complex Ginzburg-Landau equation with zero linear dispersion; in this case the CGLE is a reaction-diffusion system. I will show that patterns with large source-sink separations occur in a discrete family, due to a constraint of phase-locking type on the distance between a source and its neighbouring sinks. I will then consider the changes in source-sink patterns in response to very slow increases in the coefficient of nonlinear dispersion. I will present numerical results showing a cascade of splittings of sources into sink-source-sink triplets, culminating in spatiotemporal chaos at a parameter value that matches the change in absolute stability of the underlying periodic travelling wave. In this case the gradual change in pattern form represents an ordered, structured transition from a periodic solution to spatiotemporal chaos. The work that I will present was done in collaboration with Matthew Smith (Microsoft Research, Cambridge) and Jens Rademacher (CWI, Amsterdam).


SASW08 
15th March 2012 09:30 to 10:20 
Experimental and Modeling Studies of Turing Patterns in Chemical Systems
I will review three decades of work on Turing patterns in our laboratory, from the discovery of pattern formation in the CIMA reaction to recent experiments on three-dimensional patterns using tomographic techniques. I will discuss modeling efforts (including the Lengyel-Epstein model), approaches to designing systems that display Turing patterns, the effects of external perturbation, and the role of “structured media” (gels, microemulsions, surfaces).


SASW08 
15th March 2012 10:40 to 11:00 
A slow pushed front in a Lotka-Volterra competition model
We study the existence and stability of a traveling front in the Lotka-Volterra competition model when the rate of diffusion of one species is small. This front represents the invasion of an unstable homogeneous state by a stable one. It is noteworthy in two respects. First, we show that this front is the selected, or critical, front for this system. We utilize techniques from geometric singular perturbation theory and geometric desingularization. Second, we show that this front appears to be a pushed front in all ways except for the fact that it propagates slower than the linear spreading speed. We show that this is a result of the linear spreading speed arising as a simple pole of the resolvent instead of a branch point. Using the pointwise Green's function, we show that this pole poses no a priori obstacle to stability of the nonlinear traveling front.


SASW08 
15th March 2012 11:00 to 11:20 
C Venkataraman 
Turing patterns on growing surfaces
We investigate models for biological pattern formation via reaction-diffusion systems posed on continuously evolving surfaces. The nonlinear reaction kinetics inherent in the models and the evolution of the spatial domain mean that analytical solutions are generally unavailable and numerical simulations are necessary. In the first part of the talk, we examine the feasibility of reaction-diffusion systems to model the process of parr mark pattern formation on the skin surface of the Amago trout. By simulating a reaction-diffusion system on growing surfaces of differing mean curvature, we show that the geometry of the surface, specifically the surface curvature, plays a central role in the patterns generated by a reaction-diffusion mechanism. We conclude that the curvilinear geometry that characterises fish skin should be taken into account in future modelling endeavours. In the second part of the talk, we investigate a model for cell motility and chemotaxis. Our model consists of a surface reaction-diffusion system that models cell polarisation coupled to a geometric evolution equation for the position of the cell membrane. We derive a numerical method based on surface finite elements for the approximation of the model and we present numerical results for the migration of two- and three-dimensional cells.


SASW08 
15th March 2012 11:20 to 11:40 
Influence of cell-to-cell variability on spatial pattern formation
Many spatial patterns in biology arise through differentiation of selected cells within a tissue, which is regulated by a genetic network. This is specified by its structure, its parameterisation, and the noise on its components and reactions. The latter, in particular, is not well examined because it is rather difficult to trace. Using experimental data on trichomes, i.e. epidermal plant hairs, as an example, we examine the variability in pattern formation that is due to small differences among the cells involved in the patterning process. We employ suitable local mathematical measures based on the Voronoi diagram of the trichome positions to determine the noise level of the pattern. Although trichome initiation is a highly regulated process, we show that the experimentally observed trichome pattern is substantially disturbed by cell-to-cell variations. Using computer simulations we find that the rates governing the availability of the protein complex which triggers trichome formation play a significant role in noise-induced variations of the pattern. The focus on the effects of cell-to-cell variability yields further insights into pattern formation of trichomes. We expect that similar strategies can contribute to the understanding of other differentiation processes by elucidating the role of naturally occurring fluctuations in the concentration of cellular components or their properties.


SASW08 
15th March 2012 12:00 to 12:50 
Collision dynamics in dissipative systems
Spatially localized dissipative structures are ubiquitous; examples include vortices, chemical blobs, discharge patterns, granular patterns, and binary convective motion. When they are moving, various types of collisions are unavoidable. One of the main questions for collision dynamics is how we can describe the large deformation of each localized object at collision and predict its output. A strong collision usually causes topological changes such as merging into one body or splitting into several parts, as well as annihilation. It is in general quite difficult to trace the details of the deformation unless the interaction is very weak. We need a change in our way of thinking to solve this issue. So far we may have stuck too much to the deformation of each localized pattern and become shrouded in mystery. We instead try to characterize the hidden mechanism behind the deformation process. It may be instructive to think about the following metaphor: droplets falling down a landscape with many valleys and ridges. The motion of droplets on such a rugged landscape is rather complicated; two droplets merge or split at the saddle points and they may sink into the underground, i.e., annihilation. On the other hand, the profile of the landscape remains unchanged and in fact it controls the behaviors of the droplets. It may be worth describing the landscape itself rather than the complex deformation, namely finding where the ridges and valleys are, and how they are combined to form the whole landscape. Such a change of viewpoint has been proposed recently, claiming that the network of unstable patterns relevant to the collision process constitutes the backbone structure of the deformation process; namely, the deformation is guided by the connecting orbits among the nodes of the network. Each node is typically an unstable ordered pattern such as a steady state or a time-periodic solution. 
This viewpoint is quite useful not only for the problems mentioned above but also for more general collision problems; in particular, dynamics in heterogeneous media is one of the interesting applications, since the encounter with heterogeneity can be regarded as a collision. Similarly, questions of adaptability to external environments in biological systems fall into the above framework when they are reformulated in an appropriate way. In summary, the highly unstable and transient state arising in collision problems is an organizing center which produces various outputs in order to adjust to the emerging environments.


SAS 
15th March 2012 13:30 to 15:00 
Computational content of coinductive proofs (Interest Group in Formal Mathematics)  
SASW08 
15th March 2012 14:00 to 14:50 
M Dahlem 
Localized travelling pulses in migraine
Cortical spreading depression (SD) is a massive but transient perturbation in the brain's ionic homoeostasis. It is the underlying cause of neurological symptoms during migraine. I present a mechanism by which localized SD pulse segments are formed as long-lasting but transient patterns in a subexcitable medium, in which the homogeneous steady state is a global attractor. Initial perturbed states can develop into distinct transient pulses caused by a ghost of a saddle-node bifurcation that leads to a slow passage through a bottleneck. The location of the bottleneck in phase space is associated with a characteristic form (shape, size) of the pulse segment that depends on the curvature of the medium, i.e., the human cortex. Similar patterns have been observed with fMRI and in patients' symptom reports. The emerging transient patterns and their classification according to size and duration offer a model-based analysis of phase-dependent stimulation protocols for non-invasive neuromodulation devices, e.g. those utilizing transcranial magnetic stimulation, to intelligently target migraine.


SASW08 
15th March 2012 15:10 to 15:30 
Spatial Patterns for Reaction-Diffusion Systems with Unilateral Conditions
It is well known that Turing's effect can lead to stationary spatial patterns in reaction-diffusion systems of activator-inhibitor or substrate-depletion type. However, it is also known that this effect can occur only under certain assumptions about the diffusion speeds. The aim of the talk is to show that conditions of unilateral type (obstacles) of various kinds can lead to bifurcation of stationary spatial patterns even in a regime where classically no such phenomenon would occur.


SASW08 
15th March 2012 15:30 to 15:50 
M Pellicer 
A PDE approach for the dynamics of the inflammatory stage in diabetic wound healing
Wound healing is an extremely complicated process and is still not fully understood, all the more so when diabetes mellitus is present. The inflammatory phase, the first stage of this process, is where the major difference between diabetic and non-diabetic wound healing lies. Here, we present work in progress on the modeling and analysis of the dynamics of some of the main agents involved in this first phase.
We propose a reaction-diffusion system as a model for these dynamics. This model aims to generalize the existing approach of J. Sherratt and H. Waugh, in which an ODE system (taking into account only the time variable) is proposed as a simplified model for this situation.
After obtaining this PDE approach, the well-posedness of the problem will be established (in both a mathematical and a biological sense) and we will present some results concerning the equilibria of the system. Finally, we will show some numerical simulations to illustrate the previous results.
This is joint work with Neus Consul (Universitat Politecnica de Catalunya) and Sergio M. Oliva (Universidade de Sao Paulo).


SASW08 
15th March 2012 15:50 to 16:10 
Self-organisation of Cell Asymmetry: Turing’s inspiration is alive in a single cell
The development of a multicellular organism starts from a single fertilized egg but finally involves the specification of diverse cell types. Such diversity is created via asymmetric cell division, which is crucial for determining the distinct fates of the daughter cells and the most fundamental body plan for constructing a complex body system. Experimental investigation at the molecular level suggests that differentially segregated protein or RNA determinants inside a cell play a key role in asymmetric cell division. The localization of specific proteins during asymmetric cell division is commonly observed in many model organisms, though the specific proteins involved and the individual steps are observed in slightly different ways.
Nonetheless, the overall mechanism of asymmetric cell division remains elusive. Moreover, the mechanism by which the relevant proteins or RNA determinants are localised to a specific region on the micro scale of a small single cell remains a complete mystery. An understanding of the mechanism that integrates the individual pieces of molecular-level information is therefore highly desirable.
In this presentation, we present a mathematical model describing the asymmetric division process of the C. elegans embryo. In particular, we explore the effect of cortical flow on the localisation of posterior membrane PAR proteins and discuss the robustness of the patterning length and timing arising in the establishment phase of the cell division. Finally, we show how Turing’s spirit is alive in a single cell.


SASW08 
15th March 2012 16:10 to 16:30 
Stochastic reaction and diffusion on growing domains: understanding the breakdown of robust pattern formation
All biological patterns, from population densities to animal coat markings, can be thought of as heterogeneous spatiotemporal distributions of mobile agents. Many mathematical models have been proposed to account for the emergence of this complexity, but, in general, they have consisted of deterministic systems of differential equations, which do not take into account the stochastic nature of population interactions. One particularly pertinent criticism of these deterministic systems is that the exhibited patterns can often be highly sensitive to changes in initial conditions, domain geometry, parameter values, etc. Due to this sensitivity, we seek to understand the effects of stochasticity and growth on paradigm biological patterning models. We extend spatial Fourier analysis and growing domain mapping techniques to encompass stochastic Turing systems. Through this we find that the stochastic systems are able to realise much richer dynamics than their deterministic counterparts, in that patterns are able to exist outside the standard Turing parameter range. Further, it is seen that the inherent stochasticity in the reactions appears to be more important than the noise generated by growth when considering which wave modes are excited. Finally, although growth is able to generate robust pattern sequences in the deterministic case, we see that stochastic effects destroy this mechanism of pattern doubling. However, through Fourier analysis we are able to suggest a reason behind this lack of robustness and identify possible mechanisms by which to reclaim it.


SASW08 
15th March 2012 16:50 to 17:40 
Turing pattern formation without diffusion
The reaction-diffusion mechanism, presented by AM Turing more than 60 years ago, is currently the most popular theoretical model explaining biological pattern formation, including skin patterns. This theory suggested an unexpected possibility: that the skin pattern is a kind of stationary wave (Turing pattern or reaction-diffusion pattern) made by the combination of reaction and diffusion. At first, biologists were quite skeptical of this unusual idea. However, accumulated simulation studies have shown that this mechanism can not only produce various 2D skin patterns very similar to the real ones, but also predict dynamic pattern changes on the skin of growing fish. Turing’s theory is now accepted as a promising hypothesis, and experimental verification of it is awaited. Using the pigmentation pattern of zebrafish as the experimental system, our group at Osaka University has been studying the molecular basis of Turing pattern formation. We have identified the genes related to the pigmentation, and visualized the interactions among the pigment cells. With these experimental data, it is possible to answer the crucial question, “How is the Turing pattern formed in the real organism?” The pigmentation pattern of zebrafish is mainly made by mutual interactions between two types of pigment cells, melanophores and xanthophores. All of the interactions are transferred at the tips of the dendrites of the pigment cells. Contrary to the expectation of many theoretical biologists, there is no diffusion of chemicals involved. However, we also found that the lengths of the dendrites differ among the interactions, which makes it possible to generate the conditions for Turing pattern formation: “local positive feedback and long-range negative feedback”. Therefore, we think it is appropriate to call the identified mechanism a Turing mechanism although it does not contain any diffusion.


SASW08 
15th March 2012 17:50 to 18:40 
Pattern formation in active fluids
Biological matter is inherently dynamic and exhibits active properties. A key example is the force generation by molecular motors in the cell cytoskeleton. Such active processes give rise to the generation of active mechanical stresses and spontaneous flows in gel-like cytoskeletal networks. Active material behaviors play a key role for the dynamics of cellular processes such as cell locomotion and cell division. We will discuss intracellular flow patterns that are created by active processes in the cell cortex. By combining theory with quantitative experiments we show that observed flow patterns result from profiles of active stress generation in the system. We will also consider the situation where active stress is regulated by a diffusing molecular species. In this case, spatial concentration patterns are generated by the interplay of stress regulation and self-generated flow fields.


SASW08 
16th March 2012 09:30 to 10:20 
Turing and Non-Turing Type Pattern in Interacting Cell Systems
Examples of pattern formation in interacting cell systems will be discussed, which result from direct cell-cell interaction and cellular motion. The analysis of the respective mathematical models, systems of partial differential equations of hyperbolic type and integro-differential equations, is partly done at the linearized level and partly for suitable approximations. The long-time behavior of these models is discussed and the resulting patterns are set into context with experimental findings.


SASW08 
16th March 2012 10:40 to 11:00 
Linear Stability Analysis of Turing Patterns on Arbitrary Manifolds
Alan Turing’s mathematical model for pattern formation, based on linear instabilities in reaction-diffusion systems, has been widely applied in chemistry, biology and physics. Most of the modelling applications have been implemented on flat two-dimensional domains, even though many of the patterns under investigation, including the celebrated application to animal coat patterns, occur on non-Euclidean two-dimensional manifolds. In this work we have described an extension of Turing instability analysis to arbitrary manifolds. Our approach is simple to implement and it readily enables exploration of the effect of the geometry on Turing pattern formation in reaction-diffusion systems on arbitrarily shaped and sized domains.
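On a curved surface, the continuum of wavenumbers k² in the flat-domain analysis is replaced by the discrete spectrum of the Laplace-Beltrami operator, so the linear analysis reduces to testing each eigenvalue against the usual dispersion relation. A minimal sketch (illustrative Jacobian and sphere geometry, not the talk's own method):

```python
import numpy as np

def unstable_modes(eigvals, J, Du, Dv):
    """Return the Laplace-Beltrami eigenvalues whose modes grow linearly.

    eigvals: eigenvalues lambda >= 0 of minus the Laplace-Beltrami operator.
    J: 2x2 reaction Jacobian at the homogeneous steady state.
    A mode grows when J - lambda*diag(Du, Dv) has an eigenvalue with
    positive real part.
    """
    J = np.asarray(J, dtype=float)
    D = np.diag([Du, Dv])
    out = []
    for lam in eigvals:
        if np.real(np.linalg.eigvals(J - lam * D)).max() > 0:
            out.append(lam)
    return out

# Sphere of radius R: eigenvalues l(l+1)/R^2 (illustrative geometry).
R = 3.0
eigvals = [l * (l + 1) / R**2 for l in range(10)]
J = [[1.0, -2.0], [3.0, -4.0]]  # illustrative activator-inhibitor Jacobian
print(unstable_modes(eigvals, J, Du=1.0, Dv=30.0))
```

Changing `R` shifts the spectrum relative to the Turing band, which is one concrete way geometry selects which modes, if any, can pattern the surface.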


SASW08 
16th March 2012 11:00 to 11:20 
Pattern formation during growth development: models, numerics and applications
Mathematical modelling, numerical analysis and simulation of spatial patterning during growth in developmental biology and biomedicine is an emerging young research area with significant potential for elucidating mechanisms of pattern formation on real, evolving biological skin surfaces. Since the seminal work of Turing in 1952, which showed that a system of reacting and diffusing chemical morphogens could evolve from an initially uniform spatial distribution to concentration profiles that vary spatially (a spatial pattern), many models have been proposed on stationary domains exploiting the generalised patterning principle of short-range activation, long-range inhibition elucidated by Meinhardt, of which the Turing model is an example. Turing's hypothesis was that one or more of the morphogens played the role of a signaling chemical, such that cell fate is determined by levels of morphogen concentration. However, our recent results show that in the presence of domain growth, short-range inhibition, long-range activation as well as activator-activator mechanisms have the potential to give rise to the formation of patterns only during the growth of the organism. These results offer us a unique opportunity to study non-standard mechanisms, either experimentally or hypothetically, for pattern formation on evolving surfaces, a largely uncharted research area. In this talk I will discuss modelling, numerical analysis, computation and applications of the models for pattern formation during growth.


SASW08 
16th March 2012 11:20 to 11:40 
The influence of nonstandard boundary conditions on the generation of spatial patterns
The influence of certain unilateral boundary or interior conditions on spatial Turing patterns described by reaction-diffusion systems will be discussed. The conditions considered can model sources reflecting the concentration in their neighbourhood in the following way: if the concentration exceeds a given threshold then the source is inactive; when the concentration is about to decrease below the threshold then the source either prevents this or at least decelerates the decrease by producing a morphogen (or ligand) and supplying it into the extracellular space. Some interesting consequences follow. For instance, spatial patterns can arise in general for an arbitrary ratio of diffusion speeds, e.g. for fast diffusion of the activator and slow diffusion of the inhibitor (the opposite of Turing's original setting), and can arise also for arbitrarily small domains. Simple numerical simulations using a model proposed for the description of pigmentation in animals (in particular felids) promise to describe patterns with spots and, moreover, with a darker stripe along the spine, as observed in some felids. The unilateral conditions can be described mathematically by variational inequalities or inclusions. A more detailed explanation of the theory will be the subject of the talk of Martin Vaeth.


SASW08 
16th March 2012 12:00 to 12:50 
Pattern formation in multiscale systems: Homogenization and beyond
Various relevant pattern formation applications are observed in intrinsically heterogeneous reaction-diffusion systems. We derive a simple homogenization scheme and demonstrate that the resulting effective equations are sufficient to qualitatively reproduce the rich pattern dynamics of wave and Turing structures in the BZ-AOT microemulsion system. Furthermore, we validate this effective medium theory by simulations of wave propagation in discrete heterogeneous bistable and excitable media. We find that the approach fails if the heterogeneous medium is near a percolation threshold. For a simple discrete heterogeneous model of cardiac tissue, complex fractionated dynamics and reentrant dynamics appear in such a situation.
Work done with Sergio Alonso, Raymond Kapral and Karin John.
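As a flavour of what homogenization delivers in the simplest setting, the classic one-dimensional result for diffusion through layers in series is the fraction-weighted harmonic mean, a generic textbook formula rather than the scheme derived in the talk:

```python
def effective_diffusivity(Ds, fractions):
    """Harmonic-mean homogenization of 1D diffusion through serial layers.

    Ds: diffusivities of the layers; fractions: volume fractions (sum to 1).
    Classic textbook result for layers in series, given here only as the
    simplest illustration of an effective-medium coefficient.
    """
    assert abs(sum(fractions) - 1.0) < 1e-12, "fractions must sum to 1"
    return 1.0 / sum(f / D for f, D in zip(fractions, Ds))

# Two-phase medium with equal fractions of a fast and a slow phase:
# the slow phase dominates the effective coefficient.
print(effective_diffusivity([1.0, 0.1], [0.5, 0.5]))
```

The slow phase controls the result because serial transport must pass through it, which is also why such simple averaging breaks down near a percolation threshold, where the connectivity of the phases, not their fractions, decides transport.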


SASW08 
16th March 2012 14:00 to 14:50 
Wavenumber selection in closed reactiondiffusion systems
Motivated by the plethora of patterns observed in precipitation experiments starting with Liesegang's 1896 study, we investigate pattern formation in the wake of fronts in closed reaction-diffusion systems. We will briefly describe some models and the relation to phase separation models such as the Cahn-Hilliard equation and the phase-field system. We will then present results that characterize patterns formed in the wake of fronts.


SASW08 
16th March 2012 15:10 to 15:30 
Epileptic Seizure Dynamics as a Self-organised Spatiotemporal Pattern
Since their original conception as manifestations of electric brain activity by Hughlings Jackson, epileptic seizures have been considered an example of a pathology that is best described by a complex spatiotemporal pattern. Nevertheless, their understanding in terms of nonlinear dynamics is still surprisingly limited. In particular, the transition into and out of seizure dynamics is widely regarded to be due to specific parameter changes into and out of a region of periodic solutions, although these changes have never been pinned down to actual physiological observables.
Here we present a modelling framework for spatiotemporal epileptic dynamics in humans which builds on the notion of neural mass excitability of a generic cortical circuit. We justify the components of the model by comparison to experimental (animal) and clinical (human) data and study potential mechanisms underlying generalised and partial seizures. We find that, in addition to the dynamics provided by periodic attractors, spatiotemporal epileptic rhythms could also be explained by intermittency (spontaneous switching), and complex rhythmic transients following perturbations. We discuss these concepts using different clinical seizure types. Finally, we use the model framework to propose practical stimulation protocols to test for the presence of regions of abnormality ("epileptic foci") in the human brain.
M. Goodfellow, K. Schindler, G. Baier, NeuroImage 55, 920-932 (2011); M. Goodfellow, K. Schindler, G. Baier, NeuroImage (2011), doi:10.1016/j.neuroimage.2011.08.060; P. Taylor, G. Baier, J. Comput. Neurosci. (2011), in press, doi:10.1007/s10827-011-0332-1.


SASW08 
16th March 2012 15:30 to 15:50 
Spatial Pattern Formation in Phytoplankton Dynamics in 1D and 2D Systems
In this paper, we propose a mathematical model of infected phytoplankton dynamics with spatial movement. Reaction-diffusion models in both one and two space dimensions are studied. The proposed model extends the temporal model of [6] to the spatiotemporal domain. It is observed that the reaction-diffusion system exhibits spatiotemporal chaos in phytoplankton dynamics. The importance of the spatial extension is established, as the extended models display a wide spectrum of ecologically relevant behaviour, including chaos. The stability of the system is studied with respect to the disease contact rate and the growth fraction of infected phytoplankton that indirectly rejoins the susceptible phytoplankton population. Results of numerical experiments in one and two space dimensions, as well as time series for the temporal model, are presented using MATLAB simulations. Moreover, the stability of the corresponding temporal model is studied analytically. Finally, the three types of numerical experiments are compared in the conclusion. Keywords: reaction-diffusion equation, phytoplankton dynamics, spatiotemporal pattern formation, chaos, local stability.


SASW08 
16th March 2012 15:50 to 16:10 
LD Weise 
Spiral wave initiation in Reaction-Diffusion-Mechanics models
We introduce a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes.
The dRDM framework couples a FitzHugh-Nagumo type RD model to a mass-lattice model that undergoes finite deformations. The dRDM model describes a material whose elastic properties are given by a generalized Hooke's law for finite deformations (Seth material).
Numerically, the dRDM approach combines a finite-difference discretization of the RD equations with a Verlet integration scheme for the equations of the mass-lattice system. Using this framework we study and find new mechanisms of spiral wave initiation in the contracting excitable medium, in both homogeneous and heterogeneous cases. In particular, we show that deformation alters the "classical" vulnerable zone and forms a new vulnerable zone at longer coupling intervals. This mechanically caused vulnerable zone results in a new mechanism of spiral wave initiation, in which the unidirectional conduction block and the rotation directions of the subsequently initiated spiral waves are opposite compared to the mechanism of spiral wave initiation due to the "classical" vulnerable zone. We also study and classify mechanisms of spiral wave initiation in excitable tissue with heterogeneity in passive and in active mechanical properties.
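The Verlet time-stepping named in the abstract can be sketched in isolation. Below is a generic position-Verlet update (a minimal sketch of the scheme, not the dRDM code; the harmonic-oscillator force is a hypothetical stand-in for the mass-lattice forces):

```python
import math

def verlet_step(x, x_prev, force, dt, mass=1.0):
    # Position Verlet: x_{t+dt} = 2 x_t - x_{t-dt} + (F(x_t)/m) dt^2
    x_next = 2.0 * x - x_prev + (force(x) / mass) * dt**2
    return x_next, x

# Illustrative force: harmonic spring F = -k x (stand-in for lattice forces).
k, dt = 1.0, 0.01
f = lambda x: -k * x
x, x_prev = math.cos(0.0), math.cos(-dt)  # exact samples start the two-step scheme
for _ in range(1000):
    x, x_prev = verlet_step(x, x_prev, f, dt)
# after t = 10 the numerical trajectory tracks the exact solution cos(t) closely
```

The scheme is attractive for mass-lattice mechanics because it is explicit, time-reversible, and has good long-time energy behaviour.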


SASW08 
16th March 2012 16:10 to 16:30 
B Vasiev & N Harrison & R Diez del Corral & C Weijer 
Chemotaxis and morphogen dynamics in biological tissues
In developmental biology, patterns formed by morphogens are often affected by the movement of the cells producing the morphogens. The mutual effects of cell movement and the dynamics of concentration patterns are enhanced when the movement is due to a chemotactic response to the morphogens. Here we present a set of cell movement patterns, with associated patterns in the concentration fields of chemotactic agents, obtained analytically in a continuous model and numerically in an individual-cell-based model. We have found that a group of cells can push itself into motion, provided that it produces a chemical which acts as a chemorepellent to its constituent cells. The group of cells can also be pulled into motion by a chemoattractant produced by the surrounding cells in a tissue. Many other chemotactic scenarios come into play when the group of cells is inhomogeneous, i.e. when only part of the cells react chemotactically to the morphogen produced by the other part or by the surrounding tissue. We demonstrate these scenarios on models of primitive streak extension and regression in the chick embryo.
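The basic local rule underlying all of these scenarios, a cell stepping along the gradient of a morphogen field, can be sketched as follows (a hypothetical one-dimensional lattice rule for illustration; the talk's continuous and individual-cell-based models are richer):

```python
def chemotaxis_step(pos, field, attract=True):
    """Greedy chemotaxis on a 1D lattice: move to the neighbouring site with
    the higher field value (chemoattraction) or the lower one (chemorepulsion,
    attract=False); stay put if neither neighbour improves. Illustrative only."""
    sign = 1.0 if attract else -1.0
    best, best_val = pos, sign * field[pos]
    for q in (pos - 1, pos + 1):
        if 0 <= q < len(field) and sign * field[q] > best_val:
            best, best_val = q, sign * field[q]
    return best

# A cell in a linear attractant gradient climbs to the maximum and stays there.
field = [0.0, 1.0, 2.0, 3.0, 4.0]
pos = 0
for _ in range(10):
    pos = chemotaxis_step(pos, field)
```

In the group scenarios of the abstract, the field itself would be updated by secretion and diffusion, coupling the cells' motion back to the morphogen dynamics.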


SASW08 
16th March 2012 16:50 to 17:40 
P De Kepper & I Szalai & D Cuiñas & J Horváth 
The design of Turing patterns in solution chemistry
Twenty-two years ago, the first experimental observation of the stationary symmetry-breaking reaction-diffusion patterns predicted by Alan Turing was made. It boosted theoretical and experimental studies in this field. Though a considerable variety of patterns has been found since that first observation, the number of isothermal reactions producing such patterns was limited to only two for fifteen years. Recently, we proposed an effective method for producing stationary and non-stationary symmetry-breaking patterns in open spatial reactors. In the last three years, stationary reaction-diffusion patterns have been found in four new reactions. Among these, two are, for the first time, not based on oxyhalogen chemistry. We shall briefly present this design method and give an overview of experimental contributions in the field.


SASW08 
16th March 2012 17:50 to 18:40 
H Othmer 
The Effect of the Signaling Scheme on the Robustness of Pattern Formation in Development
Pattern formation in development is a complex process that involves spatially distributed signals called morphogens that influence gene expression and thus the phenotypic identity of cells. Usually different cell types are spatially segregated, and the boundary between them may be determined by a threshold value of some state variable. The question arises as to how sensitive the location of such a boundary is to variations in properties, such as parameter values, that characterize the system. In this talk we discuss recent work on both deterministic and stochastic reaction-diffusion models of pattern formation with a view toward understanding how the signaling scheme used for patterning affects the variability of boundary determination between cell types in a developing tissue.


SAS 
20th March 2012 16:00 to 16:30 
Proof complexity generators  
SAS 
20th March 2012 16:30 to 17:00 
M Arslanov  Definable relations in the Turing degree structures  
SAS 
22nd March 2012 13:30 to 15:00 
The relative consistency of the axiom of choice mechanized using Isabelle/ZF
The proof of the relative consistency of the axiom of choice has been mechanized using Isabelle/ZF, building on a previous mechanization of the reflection theorem. The heavy reliance on metatheory in the original proof makes the formalization unusually long, and not entirely satisfactory: two parts of the proof do not fit together. It seems impossible to solve these problems without formalizing the metatheory.
However, the present development follows a standard textbook, Kenneth Kunen's Set theory: an introduction to independence proofs, and could support the formalization of further material from that book. It also serves as an example of what to expect when deep mathematics is formalized.


SAS 
22nd March 2012 16:00 to 16:30 
Infinite chess: the mate-in-n problem is decidable and the omega-one of chess  
SAS 
22nd March 2012 16:30 to 17:00 
R Soare  The Art of Classical Computability: Why Turing and Not Church?  
SASW01 
26th March 2012 09:30 to 10:30 
Complexity of computations and proofs and pseudofinite structures
Problems to establish lower bounds for circuit size or for lengths of propositional proofs can be formulated as problems to construct expansions of pseudofinite structures.
I will explain this relation, give a few examples, and discuss some recent work aimed at proof complexity.


SASW01 
26th March 2012 11:00 to 11:30 
N Galesi 
Proof Complexity of Paris-Harrington Tautologies
We study the proof complexity of Paris-Harrington's Large Ramsey Theorem for bi-colorings of graphs. We prove a conditional lower bound in Resolution and an upper bound in bounded-depth Frege. The lower bound is conditional on a (very reasonable) hardness assumption for a weak (quasipolynomial) Pigeonhole principle in Res(2). We show that under this assumption, there is no refutation of the Paris-Harrington formulas of size quasipolynomial in the number of propositional variables. The proof technique for the lower bound extends the idea of using a combinatorial principle to blow up a counterexample for another combinatorial principle beyond the threshold of inconsistency. A strong link with the proof complexity of an unbalanced Ramsey principle for triangles is established. This is obtained by adapting some constructions due to Erdős and Mills.


SASW01 
26th March 2012 11:30 to 12:30 
S Kreutzer 
Logical Methods in the Complexity Analysis of Graph Algorithms
A classical observation in complexity theory is that many common algorithmic problems on graphs are hard to solve. Consequently, much work has gone into studying restricted classes of admissible inputs on which tractability results can be retained. A particularly rich source of structural properties which guarantee the existence of efficient algorithms for many problems on graphs comes from structural graph theory, especially algorithmic graph minor theory. It has been found that most generally hard problems become tractable on graph classes of bounded treewidth, and many remain tractable on planar graphs or graph classes excluding a fixed minor.
Besides many specific results giving algorithms for individual problems, of particular interest are results that establish tractability of a large class of problems on specific classes of instances. These results come in various flavours. In this tutorial we will focus on results that take a descriptive approach, i.e. results that use a logic to describe algorithmic problems and then establish general tractability results for all problems definable in that logic on specific classes of inputs. These results are usually referred to as algorithmic metatheorems.
In some sense the first theorem of this form is Courcelle's famous result that all monadic second-order definable problems are solvable in linear time on all classes of structures of bounded treewidth. Subsequently, many further theorems of this form and many tools for obtaining such results have been developed.
In this tutorial we will present the main methods and results obtained in this area. In the first part, the focus will be on logical tools that can be used to obtain algorithmic metatheorems.
In the second part of the tutorial we will focus on methods to establish corresponding lower bounds, i.e. intractability results for particular logics on classes of graphs that do not have any of the nice properties that make algorithmic metatheorems possible. In particular, we will present a reasonably tight lower bound for Courcelle's theorem mentioned above.


SASW01 
26th March 2012 14:00 to 15:00 
S Dantchev 
Parameterised Proof Complexity
I will start by introducing the basic notion of a parameterised proof as defined by B. Martin, S. Szeider and myself back in 2007, and will discuss a complexity gap theorem for generic parameterisations of tree-like resolution. I will then move on to results that have been obtained since then by various researchers, including parameterised lower bounds for the pigeonhole principle in resolution and for maximum clique in random graphs in tree-like resolution. I will conclude with some new results, due to B. Martin and myself, on parameterised proofs for W[1], as well as with some open problems.


SASW01 
26th March 2012 15:00 to 15:30 
Parameterized Complexity of DPLL Search Procedures
We study the performance of DPLL algorithms on parameterized problems. In particular, we investigate how difficult it is to decide whether small solutions exist for satisfiability and other combinatorial problems. For this purpose we develop a Prover-Delayer game which models the running time of DPLL procedures, and we establish an information-theoretic method to obtain lower bounds on the running time of parameterized DPLL procedures. We illustrate this technique by showing lower bounds for the parameterized pigeonhole principle and for the ordering principle. As our main application we study the DPLL procedure for the problem of deciding whether a graph has a small clique. We show that proving the absence of a k-clique requires $n^{\Omega(k)}$ steps for a non-trivial distribution of graphs close to the critical threshold. For the restricted case of tree-like Parameterized Resolution, this result answers a question asked in [BGLR11] of understanding the Resolution complexity of this family of formulas.


SASW01 
26th March 2012 16:00 to 16:30 
Model Checking for Modal Intuitionistic Dependence Logic
In this paper we consider the complexity of model checking for modal intuitionistic dependence logic (MIDL). MIDL is a natural variant of first-order dependence logic (D), which was introduced by Väänänen (2007) as a new approach to independence-friendly logic (Hintikka, Sandu, 1989). Sentences of D have exactly the same expressive power as sentences of existential second-order logic (Väänänen 2007, cf. Enderton, 1970 and Walkoe, 1970). The compositional semantics of D is team semantics, originally developed by Hodges (1997) for independence-friendly logic. Abramsky and Väänänen (2009) studied Hodges' construction in a more general context and introduced BID-logic, which extends dependence logic and includes intuitionistic implication, Boolean disjunction, as well as linear implication. It was shown that the intuitionistic fragment of BID-logic, called intuitionistic dependence logic, has exactly the same expressive power as full second-order logic on the level of sentences (Yang, 2010).
The modal version of D, modal dependence logic (MDL), was defined by Väänänen (2008). A natural variant of MDL is modal intuitionistic dependence logic, where intuitionistic implication and Boolean disjunction are added to the setting. In this paper we show that the model checking problem for MIDL is in general PSPACE-complete. Furthermore, we consider fragments of MIDL built by restricting the operators allowed in the logic. It turns out that apart from known NP-complete as well as tractable fragments there are also some coNP-complete fragments, e.g. propositional intuitionistic dependence logic.


SASW01 
26th March 2012 16:30 to 17:30 
Complexity results for dependence logic
Dependence Logic, introduced by Jouko Väänänen in 2007, is a new logic incorporating the concept of dependence into first-order logic. The expressive power of dependence logic coincides with that of existential second-order logic, and hence with the complexity class NP over finite structures. In the past few years, dependence logic has grown into a new framework in which various notions of dependence and independence can be formalized. We review recent results regarding the expressive power and complexity of certain fragments and extensions of dependence logic.


SASW01 
27th March 2012 09:00 to 10:00 
A Atserias 
Indistinguishability in Counting Logics and the Complexity of Semi-Algebraic Proofs
The recent connection between the concept of indistinguishability by the properties expressible in a certain formal language and a relaxation of structural isomorphism through linear programming brings the areas of descriptive complexity and propositional proof complexity a little bit closer together. In this talk I will overview this connection, with emphasis on the questions it has answered, but also on the many new exciting questions that it raises.


SASW01 
27th March 2012 10:00 to 10:30 
The hierarchy of equivalence relations on the natural numbers under computable reducibility
Many of the naturally arising equivalence relations in mathematics, such as isomorphism relations on various types of countable structures, turn out to be equivalence relations on a standard Borel space, and these relations form an intensely studied hierarchy under Borel reducibility. The topic of this talk is to introduce and explore the computable analogue of this robust theory, by studying the corresponding hierarchy of equivalence relations on the natural numbers under computable reducibility. Specifically, one relation E is computably reducible to another, F , if there is a unary computable function f such that x E y if and only if f(x) F f(y) . This gives rise to a very different hierarchy than the Turing degrees on such relations, since it is connected with the difficulty of the corresponding classification problems, rather than with the difficulty of computing the relations themselves. The theory is well suited for an analysis of equivalence relations on classes of c.e. structures, a rich context with many natural examples, such as the isomorphism relation on c.e. graphs or on computably presented groups. An abundance of open questions remain, and the subject is an attractive mix of methods from mathematical logic, computability, set theory, particularly descriptive set theory, and the rest of mathematics, subjects in which many of the equivalence relations arise. This is joint work with Sam Coskey and Russell Miller.
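The definition of computable reducibility can be made concrete with a toy pair of relations (my illustrative choice, not from the talk, whose examples are isomorphism relations on c.e. structures): congruence mod 2 is computably reducible to congruence mod 6 via the computable map f(x) = 3x.

```python
def E(x, y):
    # sample equivalence relation: congruence modulo 2
    return x % 2 == y % 2

def F(x, y):
    # target equivalence relation: congruence modulo 6
    return x % 6 == y % 6

def f(x):
    # computable reduction witnessing E <=_c F:
    # x = y (mod 2)  iff  3x = 3y (mod 6)
    return 3 * x

# the defining condition: x E y if and only if f(x) F f(y)
assert all(E(x, y) == F(f(x), f(y)) for x in range(50) for y in range(50))
```

Note that f need not be injective or surjective; only the biconditional matters, which is why this hierarchy tracks classification difficulty rather than the difficulty of computing the relations.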


SASW01 
27th March 2012 11:00 to 11:30 
Ordinal Strength of Logic-Enriched Type Theories
Type theories are formal languages that can be read as either a programming language or a system of logic, via the propositions-as-types or proofs-as-programs paradigm. Their syntax and metatheory are quite different in character from those of "orthodox logics" (the familiar first-order logics, second-order logics, etc). So far, it has been difficult to relate type theories to the more orthodox systems of first-order logic, second-order logic, etc.
Logic-enriched type theories (LTTs) may help with this problem. An LTT is a hybrid system consisting of a type theory (for defining mathematical objects) and a separate logical component (for reasoning about those objects). It is often possible to translate between a type theory and an orthodox logic using an LTT as an intermediate system, when finding a direct translation would be very difficult.
In this talk, I shall summarise the work so far on the proof theory of type theories, including Anton Setzer's work on ordinal strength. I shall show how LTTs allow results about type theories to be applied to orthodox logics, and vice versa. This will include a recently discovered, elementary proof of the conservativity of ACA0 over PA. I conclude by giving two new results: type-theoretic analogues of the classic results that P corresponds to first-order least fixed-point logic, and NP to second-order existential logic.


SASW01 
27th March 2012 11:30 to 12:30 
S Kreutzer 
Logical Methods in the Complexity Analysis of Graph Algorithms II
A classical observation in complexity theory is that many common algorithmic problems on graphs are hard to solve. Consequently, much work has gone into studying restricted classes of admissible inputs on which tractability results can be retained. A particularly rich source of structural properties which guarantee the existence of efficient algorithms for many problems on graphs comes from structural graph theory, especially algorithmic graph minor theory. It has been found that most generally hard problems become tractable on graph classes of bounded treewidth, and many remain tractable on planar graphs or graph classes excluding a fixed minor.
Besides many specific results giving algorithms for individual problems, of particular interest are results that establish tractability of a large class of problems on specific classes of instances. These results come in various flavours. In this tutorial we will focus on results that take a descriptive approach, i.e. results that use a logic to describe algorithmic problems and then establish general tractability results for all problems definable in that logic on specific classes of inputs. These results are usually referred to as algorithmic metatheorems.
In some sense the first theorem of this form is Courcelle's famous result that all monadic second-order definable problems are solvable in linear time on all classes of structures of bounded treewidth. Subsequently, many further theorems of this form and many tools for obtaining such results have been developed.
In this tutorial we will present the main methods and results obtained in this area. In the first part, the focus will be on logical tools that can be used to obtain algorithmic metatheorems.
In the second part of the tutorial we will focus on methods to establish corresponding lower bounds, i.e. intractability results for particular logics on classes of graphs that do not have any of the nice properties that make algorithmic metatheorems possible. In particular, we will present a reasonably tight lower bound for Courcelle's theorem mentioned above.


SASW01 
27th March 2012 14:00 to 15:00 
The complexity of enumeration and counting for acyclic conjunctive queries
Enumerating all the solutions of a problem and counting the number of these solutions are two related algorithmic tasks. The complexity measures, however, are of a different nature. For enumeration problems, for example, one of the main notions is the delay between generated solutions. A problem is considered tractable if one can enumerate the solutions one by one, without repetition, with a "reasonable" delay (e.g. polynomial, linear or constant time) between two solutions.
In this talk, we will survey the complexity of enumeration and (weighted) counting for acyclic conjunctive queries (ACQ), a well-known tractable fragment (for decision) of conjunctive queries.
We first show that there is a dichotomy for enumeration: up to some reasonable complexity assumption, such queries are enumerable either with linear or with "constant" delay. Hence, in all cases, enumeration is tractable.
For weighted counting, the situation is more complex. The unweighted quantifier-free version of this problem is known to be tractable (for combined complexity), but it is also known that introducing even a single quantified variable makes it #P-hard. By introducing a polynomial representation of queries, we first show that weighted counting for quantifier-free ACQ is still tractable and that even minimalistic extensions of the problem lead to hard cases. We then introduce a new parameter for quantified queries that permits us to isolate a large island of tractability. We show that, up to a standard assumption from parameterized complexity, this parameter fully characterizes the tractable subclasses for counting weighted solutions of ACQ queries. This completely determines the tractability frontier for weighted counting in this case.
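The easy side of the enumeration dichotomy can be illustrated on a toy acyclic query, the two-atom join R(a,b), S(b,c): after linear-time preprocessing (indexing S and semi-join-filtering R), the answers can be generated with constant work between consecutive outputs. This is an illustrative sketch, not the algorithm from the talk.

```python
def enumerate_join(R, S):
    # Preprocessing (linear time): index S on the join attribute and drop
    # R-tuples with no partner, so the nested loops never stall on dead ends.
    index = {}
    for b, c in S:
        index.setdefault(b, []).append(c)
    R = [(a, b) for (a, b) in R if b in index]  # semi-join reduction
    # Enumeration phase: constant delay between consecutive answers.
    for a, b in R:
        for c in index[b]:
            yield (a, b, c)
```

For example, list(enumerate_join([(1, 2), (3, 9)], [(2, 5), (2, 6)])) yields the two answers (1, 2, 5) and (1, 2, 6) without ever visiting the dangling tuple (3, 9).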


SASW01 
27th March 2012 15:00 to 15:30 
Bounded degree and planar spectra
There are many problems in finite model theory about which we know a lot in the unrestricted case, but which are still not thoroughly researched when we restrict the class of considered models (for example, in terms of properties of their Gaifman graphs). In this talk we consider the problem of spectra of formulae. A set of integers S is a spectrum of phi if n \in S iff phi has a model of size n. It is well known that S is a spectrum of some formula iff the problem of deciding whether n \in S is in NP when n is given in unary (equivalently, in NE when n is given in binary).
Restricting the class of models we can get, for example, bounded degree spectra (S is a bounded degree spectrum of phi iff phi has a bounded degree model of size n), weak planar spectra (S is a weak planar spectrum of phi iff phi has a planar model of size n), and forced planar spectra (S is a spectrum of phi which admits only planar models).
We provide complexity-theoretic characterizations for these cases, similar to the one above. In the case of bounded degree spectra, there is a very small (polylogarithmic) gap between our lower and upper bounds. In the case of weak planar spectra the gap is polynomial. We also provide a weaker complexity-theoretic characterization of forced planar spectra.
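As a toy illustration of the definition (my example, not from the talk): the first-order sentence saying "E is a fixed-point-free involution", i.e. that the structure is a perfect matching, has as its spectrum exactly the positive even numbers, which a brute-force model search confirms for small n.

```python
from itertools import permutations

def in_spectrum(n):
    # n is in the spectrum iff some structure of size n satisfies the sentence.
    # We search over functional interpretations of E (permutations of n points)
    # for a fixed-point-free involution, i.e. a perfect matching.
    return any(all(p[p[i]] == i and p[i] != i for i in range(n))
               for p in permutations(range(n)))
```

The exhaustive search over all structures of size n mirrors the NP upper bound for unary n, though of course only for tiny sizes.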


SASW01 
27th March 2012 16:00 to 16:30 
SAT Solving: Present and Future
SAT solving has experienced exciting developments, especially in the last ten years. It seems fair to say that certain areas of industry, like EDA (Electronic Design Automation), would not be able to perform at the current level without "SAT technology".
In the first part of my talk I will give some short overview on this development. Then, in the second part, I want to discuss directions for the future.


SASW01 
27th March 2012 16:30 to 17:00 
Phase Transitions and Computational Complexity  
SASW01 
27th March 2012 17:30 to 18:30 
A Hodges 
Alan Turing: the creative power of mathematics
Nowadays it is widely acknowledged that Turing's 1936 definition of computability, and his discovery of the concept of the universal machine, provided the foundation for the emergence of the digital computer in 1945. But Turing was not simply a logician. In this talk, I shall bring out how Turing's youthful 1936 work arose from a wide field of enquiry with mathematical, physical and philosophical elements. Then, that Turing's broad approach to mathematics and technology led him through the wartime cryptographic work into his own electronic computer plan of 1945. This extraordinary breadth of knowledge and application also created the setting for his later Artificial Intelligence plans and for his theory of biological growth.


SASW01 
28th March 2012 09:00 to 10:00 
R Santhanam 
Strong Lower Bounds for Constant-Depth Frege Imply Lower Bounds for Frege
I will discuss a recent result proved with Yuval Filmus and Toni Pitassi: exponential size lower bounds for constant-depth Frege imply superpolynomial size lower bounds for Frege. The proof is constructive, in that it shows how to simulate polynomial size Frege proofs for any sequence of tautologies with subexponential size proofs in constant-depth Frege. The simulation is tight for tree-like proofs. I will also mention consequences of the result for weak automatizability of constant-depth Frege.


SASW01 
28th March 2012 10:00 to 10:30 
Separations between propositional proof systems related to clause learning
Resolution trees with lemmas (RTL) are a resolution-based propositional proof system that is related to the DLL algorithm with clause learning, with its fragments RTL(k) being related to clause learning algorithms where the width of learned clauses is bounded by k.
For every k up to O(log n), an exponential separation between the proof systems RTL(k) and RTL(k+1) is shown, indicating that a minimal increase in the width of learned clauses can lead to an exponential speedup for clause learning algorithms.


SASW01 
28th March 2012 11:00 to 11:30 
Towards a New Theory of Bounded Arithmetic for PSPACE computations
I will present a second-order extension of well-known first-order theories of bounded arithmetic. The purpose of this work is to find a new proof-theoretic characterisation of the polynomial-space computable functions in terms of bounded arithmetic. This is joint work in progress with Toshiyasu Arai (Chiba University, Japan).


SASW01 
28th March 2012 11:30 to 12:30 
NP search problems: Complexity and reducibilities
This is the first of two tutorial talks on total NP search problems. Full abstract not yet available.


SAS 
28th March 2012 14:00 to 16:00 
M Cramer & P Koepke & B Schroeder 
Interest Group in Formal Mathematics: Natural Language Proof Checking - The Naproche System
We shall present the Naproche proof checking system, which accepts mathematical text formulated in a controlled natural language close to the mathematical vernacular. The project involves work on several levels: linguistics for processing the input language, logic for providing natural proof methods, and mathematics for the formalization process and background theories. We shall discuss some "natural" example texts accepted by the system.


SASW01 
29th March 2012 09:00 to 10:00 
Y Chen 
From almost optimal algorithms to logics for complexity classes via listings and a halting problem
Let $C$ denote one of the complexity classes ``polynomial time,'' ``logspace,'' or ``nondeterministic logspace.'' We introduce a logic $L(C)_{\mathrm{inv}}$ and show generalizations and variants of the equivalence: $L(C)_{\mathrm{inv}}$ captures $C$ if and only if there is an almost $C$-optimal algorithm in $C$ for the set TAUT of tautologies of propositional logic. These statements are also equivalent to the existence of a listing of the subsets of TAUT that lie in $C$ by corresponding Turing machines, and to the fact that a certain parameterized halting problem is in the parameterized complexity class $\mathrm{XC}_{\rm uni}$.


SASW01 
29th March 2012 10:00 to 10:30 
M Lauria 
Space Complexity of Polynomial Calculus (joint work with Yuval Filmus, Jakob Nordström, Neil Thapen, Noga Zewi)
During the last decade, an active line of research in proof complexity has been to study space complexity and time-space tradeoffs for proofs. Besides being a natural complexity measure of intrinsic interest, space is also an important issue in SAT solving, and so research has mostly focused on weak systems that are used by SAT solvers.
There has been a relatively long sequence of papers on space in resolution and resolution-based proof systems, and it is probably fair to say that resolution is reasonably well understood from this point of view. For other natural candidates to study, however, such as polynomial calculus or cutting planes, very little has been known. We are not aware of any nontrivial space lower bounds for cutting planes, and for polynomial calculus the only lower bound has been for CNF formulas of unbounded width in [Alekhnovich et al.'02], where the space lower bound is smaller than the initial width of the clauses in the formulas. Thus, in particular, it has been consistent with current knowledge that polynomial calculus could be able to refute any k-CNF formula in constant space.
In this paper, we prove several new results on space in polynomial calculus (PC), and in the extended proof system polynomial calculus resolution (PCR) studied in [Alekhnovich et al.'02]:
1. We prove an Omega(n) space lower bound in PC for the canonical 3-CNF version of the pigeonhole principle formulas PHP^m_n with m pigeons and n holes, and show that this is tight.
2. For PCR, we prove an Omega(n) space lower bound for a bitwise encoding of the functional pigeonhole principle with m pigeons and n holes. These formulas have width O(log(n)), and so this is an exponential improvement over [Alekhnovich et al.'02] measured in the width of the formulas.
3. We then present another encoding of a version of the pigeonhole principle that has constant width, and prove an Omega(n) space lower bound in PCR for these formulas as well.
4. Finally, we prove that any k-CNF formula can be refuted in PC in simultaneous exponential size and linear space (which holds for resolution and thus for PCR, but was not obviously the case for PC). We also characterize a natural class of CNF formulas for which the space complexity in resolution and PCR does not change when the formula is transformed into a 3-CNF in the canonical way, something that we believe can be useful when proving PCR space lower bounds for other well-studied formula families in proof complexity.
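For reference, the pigeonhole principle formulas that recur in these lower bounds are easy to generate. The sketch below emits a DIMACS-style clause list for the standard (unary) encoding of PHP^m_n, before any 3-CNF conversion; the variable numbering is my own convention for illustration.

```python
from itertools import combinations

def php_cnf(m, n):
    """DIMACS-style clauses for the pigeonhole principle PHP^m_n:
    variable v(i, j) = i*n + j + 1 asserts 'pigeon i sits in hole j'.
    The formula is unsatisfiable iff m > n."""
    v = lambda i, j: i * n + j + 1
    # totality: every pigeon sits in some hole
    clauses = [[v(i, j) for j in range(n)] for i in range(m)]
    # injectivity: no two pigeons share a hole
    for j in range(n):
        for i1, i2 in combinations(range(m), 2):
            clauses.append([-v(i1, j), -v(i2, j)])
    return clauses
```

For instance, php_cnf(3, 2) produces 3 totality clauses and 6 injectivity clauses, 9 in all.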


SASW01 
29th March 2012 11:00 to 11:30 
Toda's theorem in bounded arithmetic with parity quantifiers and bounded depth proof systems with parity gates
The "first part" of Toda's theorem states that every language in the polynomial hierarchy is probabilistically reducible to a language in $\oplus P$. The result also holds for the closure of the polynomial hierarchy under a parity quantifier.
We use Jerabek's framework for approximate counting to show that this part of Toda's theorem is provable in a relatively weak fragment of bounded arithmetic with a parity quantifier. We discuss the significance of the relativized version of this result for bounded depth propositional proof systems with parity gates.
Joint work with Sam Buss and Konrad Zdanowski.


SASW01 
29th March 2012 11:30 to 12:30 
NP search problems: Provability and reducibilities
This is the second of two tutorial talks on total NP search problems. Full abstract not yet available.


SASW01 
29th March 2012 14:00 to 15:00 
Descriptive complexity of linear algebra
An important question that has motivated much work in finite model theory is that of finding a logical characterisation of polynomial-time computability. That is to say, to find a logic in which a class of finite structures is expressible if, and only if, membership in the class is decidable in deterministic polynomial time (PTIME).
Much of the research in this area has focused on the extension of inflationary fixed-point logic by counting terms (FPC). This logic has been shown to capture polynomial time on many natural classes of structures, including all classes of graphs with excluded minors. At the same time, it is also known that FPC is not expressive enough to capture polynomial time on the class of all finite graphs. Noting that the various examples of properties in PTIME that are not definable in FPC can be reduced to testing the solvability of systems of linear equations, Dawar et al. (2009) introduced the extension of fixed-point logic with operators for expressing matrix rank over finite fields (FPR). Fixed-point rank logic strictly extends the expressive power of FPC while still being contained in PTIME, and it remains an open question whether there are polynomial-time properties that are not definable in this logic.
In this talk I give an overview of the main results in the logical study of linear algebra and present new developments in this area. After reviewing the background to this study, I define logics with rank operators and discuss some of the properties that can be expressed with such logics. In order to delimit the expressive power of FPR, I present variations of Ehrenfeucht-Fraïssé games that are suitable for proving non-definability in finite-variable rank logics. Using the rank games, I show that if we restrict to rank operators of arity two, then there are graph properties that can be defined by first-order logic with rank over GF(p) that are not expressible by any sentence of fixed-point logic with rank over GF(q), for all distinct primes p and q. Finally, I discuss how we can suitably restrict these rank games to get an algebraic game equivalence that can be decided in polynomial time on all classes of finite structures. As an application, this gives a family of polynomial-time approximations of graph isomorphism that is strictly stronger than the well-known Weisfeiler-Lehman method.


SASW01 
29th March 2012 15:00 to 15:30 
Definability of linear equation systems over groups and rings
(joint work with A. Dawar, E. Grädel, B. Holm, E. Kopczynski)
One of the major open questions in finite model theory is whether there is a logic for PTIME. As one promising candidate, fixed-point logic with counting, FPC, has been studied extensively, and indeed, FPC has been shown to capture PTIME on important classes of structures.
Although Cai, Fürer and Immerman ruled out FPC for the general case already in 1992, it was only in 2007 that Atserias et al. [1] found a class of natural problems explaining the major shortcoming of FPC. Specifically, they proved that the important problem of solving linear equation systems (SLES) over finite Abelian groups cannot be expressed in FPC; moreover, all other known queries separating FPC from PTIME turned out to be reducible to SLES via simple logical reductions. Hence, problems from algebra provide a new source of operators which have polynomial-time data complexity and which might be used to strictly extend the expressive power of FPC (cf. the notion of rank logics [2]).
Motivated by these insights, we study SLES over various classes of finite groups and rings from the viewpoint of logical (inter)definability. All problems that we consider are decidable in polynomial time, but not expressible in FPC. Based on the structure theory of finite rings, we prove that on important classes of rings, SLES can be reduced to SLES over cyclic groups, which constitute the most basic class of tractable domains for SLES. Furthermore, we prove closure properties for classes of queries that reduce to SLES over fixed rings. As one application, these closure properties provide normal forms for logics extended with solvability operators.
[1] Albert Atserias, Andrei A. Bulatov, and Anuj Dawar. Affine systems of equations and counting infinitary logic. Theor. Comput. Sci., 2009.
[2] Anuj Dawar, Martin Grohe, Bjarki Holm, and Bastian Laubner. Logics with rank operators. In LICS ’09: Proceedings of the 2009 24th Annual IEEE Symposium on Logic In Computer Science, 2009


SASW01 
29th March 2012 16:00 to 16:30 
Proof complexity of circuit lower bounds
Techniques that could resolve P vs. NP are considerably restricted by well-known barriers in computational complexity. There are several corresponding results in logic stating that certain fragments of arithmetic are not sufficiently strong to prove that P differs from NP or similar conjectures. We investigate possible extensions of these barriers to stronger theories: mainly, Razborov's conjecture about the hardness of Nisan-Wigderson generators for Extended Frege systems, and natural proofs in proof systems admitting the feasible disjunction property, as pointed out by Rudich.


SASW01 
29th March 2012 16:30 to 17:30 
Developing logical theories for several NP search classes
We present finitely axiomatizable two-sorted first-order theories associated with several NP search classes in the sense that search problems in a class are precisely those that are provable total in the associated theory. These theories are in the same vein as the family of theories associated with many subclasses of polytime, developed earlier by Nguyen and Cook. We consider proving several theorems in these new theories. (Joint work with Antonina Kolokolova.)


SASW01 
30th March 2012 09:00 to 10:00 
Fragments of approximate counting
We pose the question: are there any simple sentences, provable using bounded induction, that cannot already be proved using approximate counting of simple sets? More precisely, we study the longstanding open problem of giving $\forall \Sigma^b_1$ separations for fragments of bounded arithmetic in the relativized setting, but rather than considering the usual fragments defined by the amount of induction they allow, we study Jerabek's theories for approximate counting and their subtheories. We prove separations for some of the subtheories, and also give new propositional translations, in terms of random resolution refutations, for the consequences of $T^1_2$ augmented with a certain weak pigeonhole principle. Joint work with Sam Buss and Leszek Kolodziejczyk.


SASW01 
30th March 2012 10:00 to 10:30 
Root finding in TC^0 and open induction
It is known that elementary arithmetic operations are computable in uniform TC^0, and some (multiplication, division) are even complete for this complexity class. The corresponding theory of bounded arithmetic, VTC^0, duly defines addition, multiplication, and ordering, and proves that integers form a discretely ordered ring under these operations. It is a natural question what other first-order properties of elementary arithmetic operations are provable in VTC^0.
In particular, we are interested in whether VTC^0 (or a theory of similar strength) can prove open induction in the basic language of arithmetic (Shepherdson's theory IOpen). This turns out to be equivalent to the question of whether there are TC^0 root-finding algorithms for constant-degree polynomials whose soundness is provable in VTC^0. In this talk, we will establish that such root-finding algorithms exist in the real (or rather, complex) world, and we will discuss the prospects of their formalization in bounded arithmetic.


SASW01 
30th March 2012 11:00 to 11:30 
Infinitary methods in finite model theory
The accepted wisdom is that standard techniques from classical model theory fail to apply in the finite. We attempt to dispel this notion by presenting new proofs of the Gaifman and Hanf locality theorems, as they appear in Libkin's textbook on finite model theory. In particular, using compactness over an expanded vocabulary, we obtain strikingly simple arguments that apply over both finite and infinite structures, all without the complexity of the Ehrenfeucht–Fraïssé games normally used. Our techniques rely on internalizing most of the relevant mathematical features into the first-order theory itself. It remains to be seen whether these methods can be extended to proving order-invariant locality.


SASW01 
30th March 2012 11:30 to 12:30 
Locality from circuit lower bounds
We study the locality of an extension of first-order logic that captures graph queries computable in AC0, i.e., by families of polynomial-size constant-depth circuits. The extension considers first-order formulas over finite relational structures which may use arbitrary numerical predicates in such a way that their truth value is independent of the particular interpretation of the numerical predicates. We refer to such formulas as Arb-invariant FO.
In this talk I will show how to use circuit lower bounds for proving that Arb-invariant FO queries are Gaifman-local in the following sense: they cannot distinguish between two tuples that have the same neighborhood up to distance (log n)^c, where n represents the number of elements in the structure and c is a constant depending on the query.


SAS 
3rd April 2012 16:00 to 16:30 
The Complexity of Query Answering in Inconsistent Databases  
SAS 
3rd April 2012 16:30 to 17:00 
Propositional calculus and the nature of reality  
SAS 
3rd April 2012 17:00 to 17:30 
D Pattinson  Reasoning in non-wellfounded logics  
SAS 
5th April 2012 16:00 to 16:30 
Disjoint NP-pairs, automatizability and games  
SASW02 
10th April 2012 10:00 to 11:00 
Verifying Crypto Protocols Written in C
The security of much critical infrastructure depends in part on cryptographic software coded in C, and yet vulnerabilities continue to be discovered in such software. We describe recent progress on checking the security of C code implementing cryptographic software. In particular, we describe projects that combine verification-condition generation and symbolic execution techniques for C, with methods for stating and verifying security properties of abstract models of cryptographic protocols. We illustrate these techniques on C code for a simple two-message protocol.


SASW02 
10th April 2012 11:30 to 12:30 
Multi-key Homomorphic Encryption and Applications
I will present a new notion of fully homomorphic encryption (FHE) called multi-key FHE that permits computation on data encrypted under multiple unrelated keys, a new construction of multi-key FHE based on the NTRU encryption scheme, and a new application of multi-key FHE to the problem of constructing minimally interactive cloud-assisted secure multi-party computation protocols. Joint work with Adriana Lopez-Alt (NYU) and Eran Tromer (Tel Aviv U).


SASW02 
10th April 2012 13:30 to 14:30 
Compositionality of game-based secure key exchange
In this talk I will present two approaches towards modular security proofs for the composition of key exchange protocols with arbitrary other cryptographic tasks. While the two approaches differ in their scope and applicability, they both rely on game-based security and avoid the idiosyncrasies of simulation-based definitions. As an application, I will discuss how the results can be applied to proving the security of the Transport Layer Security protocol.


SASW02 
10th April 2012 14:30 to 15:30 
Rational Proofs
We unify the treatment of asymmetry of information in theoretical computer science and economics.
We put forward a new type of proof system, where an unbounded Prover and a polynomial time Verifier interact, on inputs a string x and a function f, so that the Verifier may learn f(x). In our setting there no longer are “good” or “malicious” Provers, but only RATIONAL ones. In essence, (1) the Verifier gives the Prover a reward in [0,1] determined by the transcript of their interaction; (2) the Prover wishes to maximize his expected reward; and (3) the reward is maximized only if the verifier correctly learns f(x).
Rational proofs are as powerful as interactive ones, but can be amazingly more efficient in the amount of communication involved: that is, the number of bits exchanged and the number of rounds of interaction.
Joint work with Pablo Azar.
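The incentive structure in (1)-(3) can be illustrated with a strictly proper scoring rule such as the Brier score. The toy sketch below is only an illustration of the incentive idea under that assumption, not the paper's actual protocol: a prover who knows the true probability of a binary event maximizes expected reward exactly by reporting it honestly.

```python
def brier_reward(p, outcome):
    # Brier scoring rule: a reward in [0, 1] that is larger when the
    # reported probability p agrees with the realized binary outcome.
    return 1.0 - (outcome - p) ** 2

def expected_reward(p, q):
    # The prover's expected reward for reporting p when the true
    # probability of outcome 1 is q; it is maximized exactly at p == q.
    return q * brier_reward(p, 1) + (1 - q) * brier_reward(p, 0)

q = 0.7  # the true answer, known to the unbounded prover
truthful = expected_reward(q, q)   # honest report
lying = expected_reward(0.5, q)    # any misreport earns strictly less
```

Strict properness is what makes reward maximization and correctness coincide, mirroring requirement (3) of the abstract.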


SASW02 
10th April 2012 16:00 to 17:00 
Constructive cryptography
I will present constructive cryptography, a new paradigm for defining and proving security of cryptographic protocols. Constructive cryptography differs from previous approaches both in terms of the types of statements made as well as in terms of the abstraction level at which they are proved.


SASW02 
11th April 2012 09:00 to 10:00 
P Rogaway 
Foundations of Garbled Circuits
Garbled circuits, a classical idea rooted in the work of A. Yao, have generally been understood as a cryptographic *technique*, not a cryptographic *goal*. Here we treat garbled circuits as a proper cryptographic primitive, giving a syntax for a "garbling scheme" and formalizing several security notions for such schemes. The most basic of our notions, "privacy", suffices for the classical goals of two-party secure function evaluation (SFE) and private function evaluation (PFE). We provide a simple and efficient garbling scheme achieving privacy, this built from a block cipher, and we analyze its concrete security. We next consider the "authenticity" and "obliviousness" of a garbling scheme, extending the block-cipher-based protocol to achieve these ends, too. Our treatment of garbling schemes solidifies notions that have been swirling around the literature for years, and promises a more modular approach to designing and using garbling schemes in the future.
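To make the "garble, then evaluate on one label per wire" syntax concrete, here is a minimal sketch of garbling a single AND gate. It is an assumption-laden toy, not the scheme of the talk: SHA-512 stands in for the block cipher, an all-zero tag marks the decryptable row instead of the point-and-permute technique, and the table rows are left unshuffled, which a real scheme would not allow.

```python
import os, hashlib

LABEL, TAG = 16, 16  # byte lengths of a wire label and the all-zero tag

def pad(k1, k2):
    # Hash-derived one-time pad standing in for the block cipher.
    return hashlib.sha512(k1 + k2).digest()[:LABEL + TAG]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    """Garble one AND gate with input wires a, b and output wire c:
    two random labels per wire (index 0 = FALSE, 1 = TRUE), and one
    ciphertext per input combination."""
    labels = {w: [os.urandom(LABEL) for _ in range(2)] for w in "abc"}
    table = [xor(pad(labels["a"][x], labels["b"][y]),
                 labels["c"][x & y] + b"\0" * TAG)
             for x in (0, 1) for y in (0, 1)]
    return labels, table  # a real scheme would also shuffle the table

def evaluate(table, ka, kb):
    """Holding exactly one label per input wire, recover exactly one
    output label: the row whose trial decryption ends in the zero tag."""
    for ct in table:
        pt = xor(pad(ka, kb), ct)
        if pt[LABEL:] == b"\0" * TAG:
            return pt[:LABEL]
    raise ValueError("no row decrypted")

labels, table = garble_and()
# The evaluator learns only the label of wire c for the AND of its inputs.
out = evaluate(table, labels["a"][1], labels["b"][1])
```

The "privacy" notion of the abstract corresponds to the evaluator learning the output label without learning the input bits the labels encode.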


SASW02 
11th April 2012 10:00 to 11:00 
M Fischlin 
Meta-Reductions
Meta-reductions are a technique to show impossibility results in reductionist cryptography. Roughly, such a meta-reduction M shows that there cannot exist a reduction R which turns a successful adversary against one cryptographic primitive into a successful adversary against another, hard primitive. This is shown by turning the reduction R through the meta-reduction M (a 'reduction against the reduction') into an algorithm solving the underlying primitive directly, without relying on the assumption of a successful adversary. Hence, either the reduction R cannot exist (if the primitive is really hard), or it is trivial (if the primitive is already easy). Unlike other separation techniques, meta-reductions usually work for all reductions R which treat the adversary as a black box, but often do not impose any restriction on the primitives in question, i.e., the primitive may not be treated as a black box, and the technique may thus apply to common primitives like RSA or DL. In return, all known meta-reductions work for specific primitives only. In this talk we survey recent results on meta-reductions and shed light on the applicability of this technique.


SASW02 
11th April 2012 11:30 to 12:30 
Pairing-based succinct non-interactive zero-knowledge arguments
A non-interactive zero-knowledge argument enables a prover to convince a verifier that a statement is true without revealing any other private information. We consider succinct arguments that are smaller than the size of the statement. Succinct arguments are known in the random oracle model. The goal of this talk is to present alternative pairing-based methods that can be used instead of the random oracle model, relying on different and perhaps more plausible cryptographic assumptions.


SASW02 
11th April 2012 13:30 to 14:30 
From CryptoVerif specifications to computationally secure implementations of protocols
CryptoVerif is a protocol verifier in the computational model, which generates proofs by sequences of games, like those written manually by cryptographers. We have implemented a compiler that automatically translates CryptoVerif specifications into implementations of protocols, in the OCaml language. The goal of this compiler is to generate implementations of security protocols proved secure in the computational model: from the same specification, we can prove it using CryptoVerif and generate the implementation using our compiler. We are currently using this framework in order to generate an implementation of SSH.


SASW02 
11th April 2012 14:30 to 15:30 
C Gentry 
Quadratic Span Programs for Succinct NIZKs without PCPs
We introduce a new characterization of the NP complexity class, called "Quadratic Span Programs" (QSPs), which is a natural extension of "span programs" by Karchmer and Wigderson. Our main motivation is the construction of succinct arguments of NP statements that are quickly constructible and verifiable. QSPs seem well-suited for this task, perhaps even better than Probabilistically Checkable Proofs (PCPs).
In 2010, Groth constructed a NIZK argument in the common reference string (CRS) model for Circuit-SAT consisting of only 42 elements in a bilinear group. Interestingly, his argument does not (explicitly) use PCPs. But his scheme has some disadvantages: namely, the CRS size and prover computation are both quadratic in the circuit size. In 2011, Lipmaa reduced the CRS size to quasilinear, but with prover computation still quadratic.
Using QSPs we construct a NIZK argument in the CRS model for Circuit-SAT consisting of just 7 group elements. The CRS size is linear in the circuit size, and prover computation is quasilinear, making our scheme seemingly quite practical. (The prover only needs to do a linear number of group operations; the quasilinear computation is a multipoint evaluation.)
Our results are complementary to those of Valiant (TCC 2008) and Bitansky et al. (2012), who use "bootstrapping" (recursive composition) of arguments to reduce CRS size and prover and verifier computation. QSPs also provide a crisp mathematical abstraction of some of the techniques underlying Groth's and Lipmaa's constructions.
Joint work with Rosario Gennaro, Bryan Parno, and Mariana Raykova.


SASW02 
11th April 2012 16:00 to 17:00 
How to Compute in the Presence of Leakage
We address the following problem: how to execute any algorithm, for an unbounded number of executions, in the presence of an attacker who gets to observe partial information on the internal state of the computation during executions.
This general problem has been addressed in the last few years with varying degrees of success. It is important for running cryptographic algorithms in the presence of side-channel attacks, as well as for running non-cryptographic algorithms, such as a proprietary search algorithm or a game, on a cloud server where parts of the execution's internals might be observed.
In this work, we view algorithms as running on a leaky CPU. In each (sub)computation run on the CPU, we allow the adversary to observe the output of an arbitrary and adaptively chosen length-bounded function on the CPU's input, output, and randomness.
Our main result is a general compiler for transforming any algorithm into one that is secure in the presence of this family of partial observation attacks (while maintaining the algorithm's functionality). This result is unconditional: it does not rely on any secure hardware components or cryptographic assumptions.


SASW02 
12th April 2012 09:00 to 10:00 
Definitions of Predicate Encryption
We discuss the recent simulationbased definitions of security for functional and predicate encryption and give some further results.


SASW02 
12th April 2012 10:00 to 11:00 
How provably secure are cryptographic primitives used in practice
We will discuss how 'provably secure' several cryptographic primitives of practical relevance really are. Examples include OAEP encryption and Full Domain Hash signatures.


SASW02 
12th April 2012 11:30 to 12:30 
D Pointcheval 
Efficient Smooth Projective Hash Functions and Applications
In 2008, Groth and Sahai proposed a powerful suite of techniques for constructing non-interactive zero-knowledge proofs in bilinear groups. Their proof systems have found numerous applications, including group signature schemes, anonymous voting, and anonymous credentials.
In this talk, we demonstrate that the notion of smooth projective hash functions is an efficient alternative for interactive protocols. We show that this approach provides suitable proofs for designing schemes that rely on standard security assumptions in the standard model with a common reference string and are more efficient than those obtained using the Groth-Sahai methodology when interactions are already required by the primitive itself.
As an illustration of our design principle, we describe efficient blind signature schemes, as well as an oblivious signature-based envelope scheme, but also the new notion of Language-based Authenticated Key Exchange, that includes Password-based Authenticated Key Exchange, Secret Handshakes and Credential Authenticated Key Exchange.
This is a joint work with Olivier Blazy, Céline Chevalier and Damien Vergnaud.


SASW02 
12th April 2012 13:30 to 14:30 
Leakage-Resilient Zero-Knowledge Proofs and their Applications
We initiate a study of zero-knowledge proof systems in the presence of side-channel attacks. Specifically, we consider a setting where a cheating verifier is allowed to obtain arbitrary bounded leakage on the entire state (including the witness and the random coins) of the prover during the entire protocol execution. We formalize a meaningful definition of leakage-resilient zero-knowledge (LRZK) proof system, that intuitively guarantees that "the protocol does not yield anything beyond the validity of the statement, and whatever leakage the attacker could have obtained without the protocol ever taking place."
We give a construction of an LRZK interactive proof system based on general assumptions. To the best of our knowledge, this is the first instance of a cryptographic protocol where the adversary is allowed to perform leakage attacks during the protocol execution (in contrast, prior work only focused on leakage-resilient primitives such as encryption, or leakage-resilient devices, or leakage prior to the protocol execution). Next, we give an LR-NIZK argument system based on standard assumptions.
We will also discuss several applications of our new notion.
Joint work with Sanjam Garg and Abhishek Jain.


SASW02 
12th April 2012 14:30 to 15:30 
Where Delegation Meets Einstein
We show a curious connection between the problem of *computation delegation* and the model of *no-signalling multi-prover interactive proofs*, a model that was studied in the context of multi-prover interactive proofs with provers that share quantum entanglement, and is motivated by the physical law that information cannot travel instantly (and is, like matter, limited by the speed of light).
We consider the method suggested by Aiello et al. for converting a 1-round multi-prover interactive proof (MIP) into a 1-round delegation scheme, by using a computational private information retrieval (PIR) scheme. In one direction, we show that if the underlying MIP protocol is sound against statistically no-signalling cheating provers then the resulting delegation protocol is secure (under the assumption that the underlying PIR is secure against attackers of subexponential size). In the other direction, we show that if the resulting delegation protocol is secure for every PIR scheme, and the proof of security is a black-box reduction that reduces the security of the protocol to the security of any "standard" cryptographic assumption, and such that the number of calls made by the reduction to the cheating prover is independent of the security parameter, then the underlying MIP protocol is sound against statistically no-signalling cheating provers. This is joint work with Ran Raz and Ron Rothblum.

SASW02 
12th April 2012 16:00 to 17:00 
Semantic security for the wiretap channel
The wiretap channel is a setting where one aims to provide information-theoretic privacy of communicated data based solely on the assumption that the channel from sender to adversary is “noisier” than the channel from sender to receiver. The central quest in this area has been secure schemes of optimal rate, but decades of work have provided only non-constructive proofs of existence of such schemes, and only for a notion of security that is weak from the cryptographic perspective. We define and prove equivalent two strong security metrics, one based on semantic security and the other on entropy. We then provide a new scheme that achieves these strong security goals in polynomial time with optimal rate, thereby not only solving the central open problem in this area but delivering even more in terms of security.
This is joint work with Mihir Bellare and Alexander Vardy (UCSD).
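For intuition about what "noisier" buys, the textbook degraded case is instructive: by Wyner's classical result (background for this setting, not the new scheme of the talk), the optimal secure rate over binary symmetric channels is the gap between the two binary entropies. The crossover probabilities below are illustrative choices.

```python
from math import log2

def h2(p):
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_secrecy_capacity(p_main, p_adv):
    """Wyner's secrecy capacity of a degraded binary symmetric wiretap
    channel: positive only when the adversary's channel (crossover
    probability p_adv) is noisier than the receiver's (p_main)."""
    return max(0.0, h2(p_adv) - h2(p_main))

# Receiver flips bits with probability 0.05, adversary with 0.20:
rate = bsc_secrecy_capacity(0.05, 0.20)
```

When the adversary's channel is at least as good as the receiver's, the formula gives zero: no secure rate is possible in the degraded model.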


SASW02 
12th April 2012 17:00 to 18:00 
Rothschild Lecture: Pseudo Deterministic Algorithms and Applications to Cryptography
We describe a new type of probabilistic search algorithm: a randomized algorithm which is guaranteed to run in expected polynomial time and, with high probability, to produce a correct and unique solution. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access.
We will exhibit a necessary and sufficient condition for the existence of a pseudo-deterministic algorithm for an NP relation R. Several examples of such algorithms, for problems in number theory, algebra and graph theory which improve on deterministic solutions, will also be discussed. We note that the characterization scales up.
The notion of pseudo-deterministic computations is interesting beyond sequential polynomial-time algorithms, in other domains where the use of randomization is essential. For example, one can define and obtain pseudo-deterministic fault-tolerant distributed algorithms and pseudo-deterministic sublinear algorithms for tasks which are impossible to solve without randomization. We will discuss several such domains.


SASW02 
13th April 2012 09:00 to 10:00 
M Naor 
The Privacy of the Analyst and The Power of the State
Differential privacy is a paradigm in privacy that is aimed at mitigating the drawbacks of traditional anonymization techniques, as it provides a rigorous guarantee for the added risk to an individual in participating in a database. Roughly speaking, a mechanism satisfies differential privacy if for any possible output of the mechanism and any possible set of data on individuals, the probability of obtaining this particular output changes only very little with the addition or deletion of the data on an individual.
We initiate the study of privacy for the analyst in differentially private data analysis. That is, not only are we concerned with ensuring differential privacy for the data (i.e. individuals or customers), which are the usual concern of differential privacy, but we also consider (differential) privacy for the set of queries posed by each data analyst. The privacy achieved is with respect to other analysts, who are the users of the system. This problem of analysts' queries being leaked arises only in the context of stateful privacy mechanisms, in which the responses to queries depend on other queries posed. A recent wave of results in the area of differential privacy utilized coordinated noise and state in order to allow answering hugely many queries. We argue the problem is real by proving an exponential gap between the number of queries that can be answered (with nontrivial error) by stateless and stateful differentially private mechanisms. We then give a stateful algorithm for differentially private data analysis that also ensures differential privacy for the analyst and can answer exponentially many queries. Joint work with Cynthia Dwork and Salil Vadhan.

SASW02 
13th April 2012 10:00 to 11:00 
How to fake auxiliary input
We show that for any joint distribution $(X,A)$ and any family $F$ of distinguishers, e.g. polynomial size circuits, there exists an efficient (deterministic) simulator $h$ such that $F$ cannot distinguish $(X,A)$ from $(X,h(X))$, i.e. for all $f$ in $F$ we have $\bigl| E[f(X,A)] - E[f(X,h(X))] \bigr| < \epsilon$. Efficient means exponential in the length of $A$ and polynomial in $1/\epsilon$. Several known results, including a strong version of the dense model theorem, follow as simple corollaries. The main motivation for this work is to simplify leakage-resilience proofs, which currently use the dense model theorem.


SASW02 
13th April 2012 11:30 to 12:30 
B Barak 
Zero Knowledge Proofs and Nuclear Disarmament
I'll describe a physical implementation of zero-knowledge proofs whose goal is to verify that two physical objects are identical, without revealing any information about them. Our motivation is the task of verifying that an about-to-be-dismantled nuclear warhead is authentic without revealing its classified design. This is one of the technical hurdles that arises in implementing nuclear disarmament. Joint work with Alex Glaser and Robert Goldston from Princeton University and the Princeton Plasma Physics Laboratory.


SASW02 
13th April 2012 13:30 to 14:30 
C Fournet 
Cryptographically verified implementation of TLS 1.2
Joint work with Karthik Bhargavan, Markulf Kohlweiss, Alfredo Pironti, and PierreYves Strub.
We develop a reference implementation of the TLS 1.2 Internet Standard. Our code fully supports binary wire formats, multiple ciphersuites, sessions and connections with rehandshakes and resumptions, alerts, message fragmentation, ... as prescribed by the standard; it interoperates with other implementations, notably existing web browsers and servers. At the same time, our code is carefully structured to enable its automated, modular verification, from its main API down to computational security assumptions on its underlying cryptographic algorithms. In the talk, I will review our modular cryptographic verification method, which is based on F7, a refinement type checker coupled with an SMT solver, with a theory formalized in Coq. Using F7, we verify constructions and protocols coded in F# by typing them against new cryptographic interfaces that capture their authenticity and secrecy properties. We relate their ideal functionalities and concrete implementations, using game-based program transformations behind typed interfaces, so that we can easily compose them. I will also outline our TLS implementation, coded in F# and specified in F7. I will present its main typed interfaces, which can be read as specifications of "giant" ideal functionalities for its main components, such as authenticated encryption for the record layer and key establishment for the handshake. I will finally describe our main security results, as well as some new attacks we found in the process.

SASW02 
13th April 2012 14:30 to 15:30 
Y Dodis 
Overcoming Weak Expectations
Recently, there has been renewed interest in basing cryptographic primitives on weak secrets, where the only information about the secret is some nontrivial amount of (min-)entropy. From a formal point of view, such results require one to upper bound the expectation of some function f(X), where X is the weak source in question. We show an elementary inequality which essentially upper bounds such a 'weak expectation' by two terms, the first of which is *independent* of f, while the second only depends on the 'variance' of f under the *uniform* distribution. Quite remarkably, as relatively simple corollaries of this elementary inequality, we obtain some 'unexpected' results, in several cases noticeably simplifying/improving prior techniques for the same problem. Examples include non-malleable extractors, leakage-resilient symmetric encryption, seed-dependent condensers and improved entropy loss for the leftover hash lemma.


SASW02 
13th April 2012 16:00 to 17:00 
Computer-aided security proofs
CertiCrypt is a toolset that assists the construction and verification of cryptographic proofs; it supports common patterns of reasoning in cryptography, and has been used successfully to prove the security of many examples, including encryption schemes, signature schemes, zero-knowledge protocols and hash functions. I will present recent developments in the tool and survey new applications.


SAS 
16th April 2012 14:00 to 16:00 
R Glaschick 
Materialization of Universal Turing Machines
Currently, a curious object adorns the lobby of the Isaac Newton Institute for Mathematical Sciences in Cambridge: a physical Turing machine constructed by Gisbert Hasenjaeger.
Hasenjaeger (1919–2006) was a German mathematician and logician and professor of logic at the universities of Muenster and Bonn in Germany. During the Second World War, he was assigned to the central cryptographic unit of the German army and unwittingly competed with Alan Turing, having been given the task of breaking the Enigma code (to test its security).
He did not find the vulnerabilities that Turing exploited in breaking the code, and later said: "Fortunately! Otherwise the war would have lasted even longer!"
In the 1950s and 1960s, Hasenjaeger built several demonstration models of Turing machines. The machine on display at the Newton Institute is a Turing machine with modifications by Moore and Wang.
Our speaker, Rainer Glaschick, investigated the details of the mechanism of this machine and other physical Turing machines that are at the Heinz Nixdorf MuseumsForum in Paderborn, Germany and will report on his experiences with these curious machines.


SAS 
17th April 2012 16:00 to 16:30 
Uniformity, circuits and nondeterministic Turing machines  
SAS 
17th April 2012 16:30 to 17:00 
The computational complexity of small universal Turing machines  
SAS 
17th April 2012 17:00 to 17:30 
T Neary  The program-size complexity of universal Turing machines  
SAS 
1st May 2012 16:00 to 16:30 
A Kolokolova  Proof complexity of expander graph techniques  
SAS 
1st May 2012 16:30 to 17:00 
Cryptographic theory for practice's sake  
SAS 
3rd May 2012 16:00 to 16:30 
Yin and Yang of Computer Science: Algorithms versus Lower Bounds  
SAS 
3rd May 2012 16:30 to 17:00 
Circularity, Paradoxes and Proofs  
SAS 
3rd May 2012 17:00 to 17:30 
Hyperarithmetical categoricity and abelian groups  
SAS 
8th May 2012 16:00 to 16:30 
Ajtai's Completeness Theorem for Nonstandard Finite Structures  
SAS 
8th May 2012 16:30 to 17:00 
Open Problem Session  
SAS 
9th May 2012 14:00 to 15:00 
Valiant's Shift problem: A reduction to a problem about graph guessing games
The talk will survey a number of results that were partly developed during my visit to the Newton Institute. I will show that Valiant's Shift problem/conjecture, which has been open for more than 30 years, naturally reduces to questions about guessing games. In the talk I will also provide a new perspective on information bottlenecks in Boolean Circuits/Communication Networks.


SAS 
10th May 2012 16:00 to 16:30 
Consistency, physics, and coinduction
In the first part of this talk we discuss the consistency problem of mathematics. Although we have very well verified mathematical proofs of $\Pi_1$-statements such as Fermat's last theorem, we cannot, because of Gödel's second incompleteness theorem, exclude the existence of a counterexample with absolute certainty: a counterexample would, unless there was a mistake in the proof, prove the inconsistency of the mathematical framework used. This uncertainty has similarities with physics, where we cannot exclude that nature fails to follow the laws of physics as determined up to now. All it would imply is that the laws of physics need to be adjusted. Whereas physicists openly admit this as a possibility, in mathematics this fact is not discussed very openly. We discuss how physics and mathematics are in a similar situation. In mathematics we have no criterion to check with absolute certainty that mathematics is consistent. In physics we cannot conduct an experiment which determines any law of physics with absolute certainty. All we can do is to carry out experiments to test whether nature follows in one particular instance the laws of physics formulated by physicists. In fact we know that the laws of physics are incomplete, and therefore not fully correct, and we have seen the laws of physics change over the course of the history of physics. Changes of the laws of physics did not affect most calculations made before, because these were thoroughly checked by experiments. The changes had to be made only in extreme cases (high speed, small distances). In the same way, we know by reverse mathematics that most mathematical theorems can be proved in relatively weak theories, and therefore would not be affected by a potential inconsistency, which would probably make use of proof-theoretically very strong principles. In both mathematics and physics we can carry out tests for the axiom systems used.
In physics these tests are done by experiments as well as by theoretical investigations. In mathematics this is done by looking for counterexamples to theorems which have been proved, and by applying the full range of metamathematical investigations, especially proof-theoretic analysis, normalisation proofs and the formation of constructive foundations of mathematics. As the laws of physics are empirical, so are $\Pi_1$-theorems (phrasing it like this is due to an informal comment by Peter Aczel). As one tries in physics to determine the laws of physics and draw conclusions from them, in logic one tries to determine the laws of the infinite, and derive conclusions from those laws. We can obtain a high degree of certainty, but no absolute certainty, and still can have trust in the theorems derived. In the second part we investigate informal reasoning about coinductive statements. 

SAS 
10th May 2012 16:30 to 17:00 
P Aczel  Syntax and Semantics  another look, especially for dependent type theories  
SAS 
10th May 2012 17:00 to 17:30 
Effective Fractal Dimension in Computational Complexity and Algorithmic Information Theory  
SAS 
15th May 2012 16:00 to 16:30 
R Lubarsky  Elementary Real Analysis  
SAS 
15th May 2012 16:30 to 17:00 
Consistency statements in Bounded Arithmetic  
SAS 
17th May 2012 16:00 to 16:30 
S Cook  A Complexity Class Based on Comparator Circuits  
SAS 
17th May 2012 16:30 to 17:00 
Algebra-valued models of set theory  
SAS 
17th May 2012 17:00 to 17:30 
Polylogarithmic Cuts of Models of Weak Arithmetic  
SAS 
22nd May 2012 16:00 to 16:30 
M Seisenberger  On the Computational Content of the Minimal-Bad-Sequence Argument  
SAS 
22nd May 2012 16:30 to 17:00 
Coinduction in Computable Analysis  
SAS 
22nd May 2012 17:00 to 17:30 
Complexity of exponential integer parts on real closed exponential fields  
SAS 
29th May 2012 16:00 to 16:30 
J Avigad  Uniform distribution and algorithmic randomness  
SAS 
29th May 2012 16:30 to 17:00 
Automorphisms of models of set theory  
SAS 
31st May 2012 16:00 to 16:30 
R Ramanujam 
Bounds on proof size in security verification
In the formal analysis of security protocols, what the intruder infers from messages travelling on the network can be abstracted into checking whether a term is derivable in an inference system from a given finite set of terms. Thus verification amounts (essentially) to studying bounds on proof size. We use this technique in the context of distributive encryption to show that the verification problem is DEXPTIME-complete. The work reported is joint with A. Baskar (Darmstadt) and S. P. Suresh (Chennai).
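As a rough illustration of term derivability, the sketch below implements a generic Dolev-Yao-style saturation check for pairing and symmetric encryption. It is not the paper's distributive-encryption inference system, and all names are ours:

```python
# Illustrative sketch (ours): decide whether a term is derivable from a finite
# set of terms under pairing and symmetric-encryption rules.
# Terms: atoms are strings; ("pair", a, b) and ("enc", msg, key) are tuples.

def analyze(terms):
    """Saturate under analysis rules: split pairs; decrypt when the key is known."""
    known = set(terms)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            derived = set()
            if isinstance(t, tuple) and t[0] == "pair":
                derived |= {t[1], t[2]}
            if isinstance(t, tuple) and t[0] == "enc" and t[2] in known:
                derived.add(t[1])   # key known, so the plaintext becomes known
            if not derived <= known:
                known |= derived
                changed = True
    return known

def derivable(goal, terms):
    """The goal is derivable iff it can be synthesized from the analyzed knowledge."""
    known = analyze(terms)
    def synth(t):
        if t in known:
            return True
        if isinstance(t, tuple) and t[0] in ("pair", "enc"):
            return synth(t[1]) and synth(t[2])
        return False
    return synth(goal)
```

For instance, from the set {enc("secret", "k"), pair("k", "a")} the intruder first learns "k" by splitting the pair and can then decrypt "secret".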


SAS 
31st May 2012 16:30 to 17:00 
Definability via Kalimullin Pairs  
SAS 
31st May 2012 17:00 to 17:30 
Searchable sets and ordinals in system T  
SAS 
7th June 2012 16:00 to 16:30 
A Lewis  The complexity of computable categoricity  
SAS 
7th June 2012 16:30 to 17:00 
Conservative extensions and the jump of a structure
The degree spectrum of an abstract structure is a measure of its
complexity. We consider a relation between abstract structures
$\mathfrak{A}$ and $\mathfrak{B}$, possibly with different
signatures and $\mathfrak{A}\subseteq \mathfrak{B}$, called
conservative extension. We give a characterization of this
relation in terms of definability by computable $\Sigma_n$
formulae on these structures. We show that this relation provides
a finer complexity measure than the one given by degree spectra.
As an application, we obtain that the $n$-th jump of a structure
and its Marker extension are conservative extensions of the
original structure. We present a jump inversion theorem for
abstract structures. We prove that for all natural numbers $n$
and $k$ and each sufficiently complex structure $\mathfrak{A}$,
there is a structure $\mathfrak{B}$ such that the sets definable
on $\mathfrak{A}$ by computable $\Sigma^c_n$ formulae are exactly
the sets definable on $\mathfrak{B}$ by computable $\Sigma^c_k$
formulae.


SASW04 
2nd July 2012 09:30 to 10:30 
Alan Turing in the twenty-first century: normal numbers, randomness, and finite automata
We discuss ways in which Turing's then-unpublished ``A Note on
Normal Numbers'' foreshadows and can still guide research in our
time.
This research was supported in part by NSF Grant 0652569. Part of the work was done while the author was on sabbatical at Caltech and the Isaac Newton Institute for Mathematical Sciences at the University of Cambridge. 

SASW04 
2nd July 2012 11:00 to 12:00 
Demuth randomness and its variants
A Demuth test is like a Martin-Löf test where the passing condition
is to be outside infinitely many components; the strength of the test is
enhanced by the possibility of exchanging the $n$-th component of the test
a computably bounded number of times. Demuth introduced Demuth randomness
of reals in 1982, and showed that the Denjoy alternative for Markov
computable functions holds at any such real. In [1] we proved
that Demuth's hypothesis is in fact too strong: difference randomness
(i.e., ML-randomness together with incompleteness) of the real already
suffices. However, Demuth randomness now plays an important role in
the interaction of computability and randomness. The basic relevant fact
here is that a Demuth random set can be below the halting problem.
In [2] we characterized lowness for Demuth randomness by a property called BLR-traceability, in conjunction with being computably dominated. The low for Demuth random sets form a proper subclass of the computably traceable sets used by Terwijn and Zambella to characterize lowness for Schnorr randomness. The covering problem asks whether each K-trivial set $A$ is Turing below a difference random set $Y$. Combining work of Kucera and Nies [3] with results of Downey, Diamondstone, Greenberg and Turetsky gives an affirmative answer to an analogous question: a set $A$ is strongly jump traceable if and only if it is below a Demuth random set $Y$. In recent work, Bienvenu, Greenberg, Kucera, Nies, and Turetsky introduced a weakening of Demuth randomness called Oberwolfach randomness. They used it to build a ``smart'' K-trivial set $A$: it is difficult to cover in that any Martin-Löf random set $Y$ above $A$ must be LR-hard. [1] Bienvenu, Hoelzl, Miller, and Nies. "The Denjoy alternative for computable functions." Proceedings of STACS 2012. [2] Bienvenu, Downey, Greenberg, Nies, and Turetsky. "Characterizing lowness for Demuth randomness." Submitted. [3] Kucera, A. and Nies, A. "Demuth randomness and computational complexity." Annals of Pure and Applied Logic 162 (2011) 504–513. 

SASW04 
2nd July 2012 12:00 to 12:30 
Nonstandard Analysis: a new way to compute
Constructive Analysis was introduced by Errett Bishop to identify the
`computational meaning' of mathematics. Bishop redeveloped mathematics,
in the spirit of intuitionistic mathematics, based on primitive notions
like algorithm, explicit computation, and finite
procedure. The exact meaning of these vague terms was left open,
to ensure the compatibility of Constructive Analysis with several
traditions in mathematics. Constructive Reverse Mathematics (CRM)
is a spin-off from Harvey Friedman's famous Reverse Mathematics
program, based on Constructive Analysis.
In this talk, we introduce `$\Omega$-invariance': a simple and elegant definition of finite procedure in (classical) Nonstandard Analysis. We show that $\Omega$-invariance captures Bishop's notion of algorithm quite well. In particular, using an intuitive interpretation based on $\Omega$-invariance, we obtain many results from CRM inside Nonstandard Analysis. Similar results for Computability (aka Recursion) Theory are also discussed. This research is made possible through the generous support of a grant from the John Templeton Foundation for the project Philosophical Frontiers in Reverse Mathematics. Please note that the opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation. 

SASW04 
2nd July 2012 14:00 to 15:00 
Topological arguments in Kolmogorov complexity
We show how topological arguments (simple facts about non-homotopic mappings) can be used to prove results about Kolmogorov complexity. In particular, we show that for every string $x$ of complexity at least $n + c\log n$ one can find a string $y$ such that both conditional complexities $C(x|y)$ and $C(y|x)$ are equal to $n+O(1)$.
See also: http://arxiv.org/abs/1206.4927 

SASW04 
2nd July 2012 15:00 to 15:30 
Schnorr triviality is equivalent to being a basis for tt-Schnorr randomness
We present some new characterizations of Schnorr triviality. Schnorr
triviality is defined using complexity via a computable measure machine,
with which Schnorr randomness has a characterization. Since we have a
characterization of Schnorr randomness via a decidable prefix-free machine,
we also have a characterization of Schnorr triviality using complexity
via a decidable prefix-free machine. It should be noted that numerous
characterizations of Schnorr triviality have the following form: for any
computable object, there exists another computable object such that the
real is in some object. By defining a basis for Schnorr randomness in a
similar manner, we can show the equivalence to Schnorr triviality while
Franklin and Stephan (2010) showed that there exists a Schnorr trivial
set that is not truth-table reducible to any Schnorr random set.


SASW04 
2nd July 2012 17:00 to 17:30 
Two betting strategies that predict all compressible sequences
A new type of betting game that characterizes Martin-Löf randomness
is introduced. These betting games can be compared to the martingale
processes of Hitchcock and Lutz as well as to non-monotonic betting
strategies. Sequence-set betting is defined as successive betting on
prefix-free sets that contain a finite number of words. In each iteration
we start with an initial prefix-free set $P$ and an initial capital
$c$, then we divide $P$ into two prefix-free sets $P_{0}$ and $P_{1}$
of equal size and wager some amount of capital on one of the sets, let's
say $P_{0}$. If the infinite sequence we are betting on has a prefix
in $P_{0}$ then in the next iteration the initial set is $P_{0}$ and
the wagered amount is doubled. If the infinite sequence we are betting
on does not have a prefix in $P_{0}$ then in the next iteration the
initial set is $P_{1}$ and the wagered amount is lost. In the first
iteration the initial prefix-free set contains the empty string. The
player succeeds on the infinite sequence if the series of bets increases
capital unboundedly. Non-monotonic betting can be viewed as sequence-set
betting with an additional requirement that the initial prefix-free set
is divided into two prefix-free sets such that sequences in one set have
at some position bit 0 and in the other have at that same position bit
1. On the other hand, if the requirement that the initial prefix-free set
$P$ is divided into two prefix-free sets of equal size is removed, and
we allow that $P_{0}$ may have a different size from $P_{1}$, we have a
betting game that is equivalent to martingale processes, in the sense that
for each martingale process there is a betting strategy that succeeds on
the same sequences as the martingale process, and for each betting strategy
there is a martingale process that succeeds on the same sequences as
the betting strategy. It is shown that, unlike for martingale processes,
for any computable sequence-set betting strategy there is an infinite
sequence on which the betting strategy doesn't succeed and which is not
Martin-Löf random. Furthermore, it is shown that there is an algorithm
that constructs two sets of betting decisions for two sequence-set
betting strategies such that, for any sequence that is not Martin-Löf
random, at least one of them succeeds on that sequence.
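The game described in the abstract can be simulated directly. The sketch below is illustrative only; the helper names `split` and `pick`, which encode the player's strategy, are ours:

```python
# Illustrative simulation (ours) of one run of sequence-set betting.

def has_prefix_in(seq_bits, prefix_set):
    """True if the sequence (given by a long enough prefix of its bits)
    starts with some word of the prefix-free set."""
    return any(seq_bits.startswith(p) for p in prefix_set)

def play(seq_bits, rounds, split, pick):
    """Run `rounds` iterations of sequence-set betting.

    split(P) -> (P0, P1): divide prefix-free set P into two equal-sized
    prefix-free sets.  pick(P0, P1, capital) -> (chosen_set, wager).
    Returns the capital after the last round."""
    P = [""]        # first iteration: the initial prefix-free set is {empty string}
    capital = 1.0
    for _ in range(rounds):
        P0, P1 = split(P)
        chosen, wager = pick(P0, P1, capital)
        other = P1 if chosen is P0 else P0
        if has_prefix_in(seq_bits, chosen):
            capital += wager    # the wagered amount is doubled
            P = chosen
        else:
            capital -= wager    # the wagered amount is lost
            P = other
    return capital
```

For instance, always splitting by appending one more bit and wagering half the capital on the 0-extensions multiplies the capital by 3/2 in every round against the all-zeros sequence.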


SASW04 
2nd July 2012 17:30 to 18:00 
Computable randomness and its properties
Computable randomness at first does not seem as natural a randomness
notion as Schnorr and Martin-Löf randomness. However, recently
Brattka, Miller, and Nies [1] have shown that computable randomness
is closely linked to differentiability. Why is this so? What are the
chances that, say, computable randomness will also be linked to the
ergodic theorem? In this talk I will explain how computable randomness
is similar to and how it is different from other notions of randomness.
Unlike other notions of randomness, computable randomness is closely linked to the Borel sigma-algebra of a space. This has a number of interesting implications.
However, this line of research still leaves many open questions about the nature of computable randomness and the nature of randomness in general. I believe the tools used to explore computable randomness may have other applications to algorithmic randomness and computable analysis. [1] Vasco Brattka, Joseph S. Miller, and André Nies. "Randomness and differentiability." Submitted. [2] Jason Rute. "Computable randomness and betting for computable probability spaces." In preparation. 

SASW04 
3rd July 2012 09:00 to 10:00 
Propagation of partial randomness
Let $X$ be an infinite sequence of $0$'s and $1$'s, i.e.,
$X\in\{0,1\}^\mathbb{N}$. Even if $X$ is not Martin-Löf random,
we can nevertheless quantify the amount of partial randomness
which is inherent in $X$. Many researchers including Calude,
Hudelson, KjosHanssen, Merkle, Miller, Reimann, Staiger, Tadaki,
and Terwijn have studied partial randomness. We now present some new
results due to Higuchi, Hudelson, Simpson and Yokoyama concerning
propagation of partial randomness. Our results say that
if $X$ has a specific amount of partial randomness, then $X$ has
an equal amount of partial randomness relative to certain Turing
oracles. More precisely, let $\mathrm{KA}$ denote a priori
Kolmogorov complexity, i.e., $\mathrm{KA}(\sigma)=-\log_2 m(\sigma)$
where $m$ is Levin's universal left-r.e. semimeasure. Note that
$\mathrm{KA}$ is similar but not identical to the more familiar
prefix-free Kolmogorov complexity. Given a computable function
$f:\{0,1\}^*\to[0,\infty)$, we say that $X\in\{0,1\}^\mathbb{N}$
is strongly $f$-random if $\exists c\,\forall
n\,(\mathrm{KA}(X{\upharpoonright}\{1,\ldots,n\})>f(X{\upharpoonright}\{1,\ldots,n\})-c)$.
Two of our results read as follows.
Theorem 1. Assume that $X$ is strongly $f$-random and Turing reducible to $Y$ where $Y$ is Martin-Löf random relative to $Z$. Then $X$ is strongly $f$-random relative to $Z$. Theorem 2. Assume that $\forall i\,(X_i$ is strongly $f_i$-random$)$. Then, we can find a $\mathrm{PA}$-oracle $Z$ such that $\forall i\,(X_i$ is strongly $f_i$-random relative to $Z)$. We also show that Theorems 1 and 2 fail badly with $\mathrm{KA}$ replaced by $\mathrm{KP}=$ prefix-free complexity. 

SASW04 
3rd July 2012 10:00 to 10:30 
Computably enumerable partial orders
We study the degree spectra and reversemathematical applications
of computably enumerable and co-computably enumerable partial
orders. We formulate versions of the chain/antichain principle
and the ascending/descending sequence principle for such orders, and
show that the former is strictly stronger than the latter. We then
show that every $\emptyset'$-computable structure (or even just
one of c.e. degree) has the same degree spectrum as some computably
enumerable (co-c.e.) partial order, and hence that there is a
c.e. (co-c.e.) partial order with spectrum equal to the set of
nonzero degrees.
A copy of the submitted paper can be found at http://www.nd.edu/~cholak/papers/ceorderings.pdf 
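As a finite toy analogue of the chain/antichain principle (ours, not from the paper): by a Mirsky-style level-counting argument, every finite poset on n elements contains a chain or an antichain of size at least sqrt(n), and one can be extracted effectively:

```python
# Toy sketch (ours): extract a chain or antichain of size >= sqrt(n)
# from an n-element poset given by a decidable order relation.

def chain_or_antichain(elems, leq):
    """leq(a, b) decides a <= b. Returns ("chain", C) or ("antichain", A)
    with a witness of size at least sqrt(len(elems))."""
    elems = list(elems)
    n = len(elems)
    # Sorting by the number of elements below x yields a linear extension.
    order = sorted(elems, key=lambda x: sum(leq(y, x) for y in elems))
    height, pred = {}, {}
    for x in order:
        h, p = 1, None
        for y in order:
            if y != x and leq(y, x) and height.get(y, 0) + 1 > h:
                h, p = height[y] + 1, y
        height[x], pred[x] = h, p
    top = max(order, key=lambda x: height[x])
    if height[top] ** 2 >= n:          # the longest chain already has >= sqrt(n) elements
        chain = []
        while top is not None:
            chain.append(top)
            top = pred[top]
        return "chain", chain[::-1]
    # Otherwise max height h < sqrt(n), so some level (an antichain) has > sqrt(n).
    levels = {}
    for x in order:
        levels.setdefault(height[x], []).append(x)
    return "antichain", max(levels.values(), key=len)
```

On the divisibility order on {1, ..., 9} this finds the chain 1, 2, 4, 8; on a flat order (equality) it returns all elements as an antichain.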

SASW04 
3rd July 2012 11:00 to 12:00 
On the computational content of the Baire Category Theorem
We present results on the classification of the computational content of
the Baire Category Theorem in the Weihrauch lattice. The Baire Category
Theorem can be seen as a pigeonhole principle that states that a large
(= complete) metric space cannot be decomposed into a countable number
of small (= nowhere dense) pieces (= closed sets). The difficulty of
the corresponding computational task depends on the logical form of the
statement as well as on the information that is provided. In the first
logical form the task is to find a point in the space that is left out
by a given decomposition of the space that consists of small pieces.
In the contrapositive form the task is to find a piece that is not small
in a decomposition that exhausts the entire space. In both cases pieces
can be given by descriptions in negative or positive form. We present a
complete classification of the complexity of the Baire Category Theorem in
all four cases and for certain types of spaces. The results are based on
joint work with Guido Gherardi and Alberto Marcone, on the one hand, and
Matthew Hendtlass and Alexander Kreuzer, on the other hand. One obtains
a refinement of what is known in reverse mathematics in this way.
[1] Vasco Brattka and Guido Gherardi. "Effective choice and boundedness principles in computable analysis." The Bulletin of Symbolic Logic, 17(1):73–117, 2011. [2] Vasco Brattka and Guido Gherardi. "Weihrauch degrees, omniscience principles and weak computability." The Journal of Symbolic Logic, 76(1):143–176, 2011. [3] Vasco Brattka, Guido Gherardi, and Alberto Marcone. "The Bolzano-Weierstrass theorem is the jump of weak König's lemma." Annals of Pure and Applied Logic, 163:623–655, 2012. [4] Vasco Brattka, Matthew Hendtlass, and Alexander P. Kreuzer. "On the Weihrauch complexity of computability theory." unpublished notes, 2012.
[5] Vasco Brattka.
"Computable versions of Baire's category theorem."
In Jiří Sgall, Aleš Pultr, and Petr Kolman, editors,
Mathematical Foundations of Computer Science 2001,
volume 2136 of Lecture Notes in Computer Science, pages 224–235, Berlin, 2001.
Springer. [6] Douglas K. Brown and Stephen G. Simpson. "The Baire category theorem in weak subsystems of second order arithmetic." The Journal of Symbolic Logic, 58:557–578, 1993. 

SASW04 
3rd July 2012 12:00 to 12:30 
Trivial measures are not so trivial
Although algorithmic randomness with respect to various biased computable
measures is well-studied, little attention has been paid to algorithmic
randomness with respect to computable trivial measures, where
a measure $\mu$ on $2^\omega$ is trivial if the support of $\mu$ consists
of a countable collection of sequences. In this talk, I will show
that there is much more structure to trivial measures than has been
previously suspected. In particular, I will outline the construction
of a number of such measures.
Lastly, I will show that, if we consider the class of $LR$-degrees associated with a trivial measure $\mu$ (generalizing the standard definition of the $LR$-degrees), then for every finite distributive lattice $\mathcal{L}=(L, \leq)$, there is a trivial measure $\mu$ such that the collection of $LR$-degrees with respect to $\mu$ is a finite distributive lattice that is isomorphic to $\mathcal{L}$. 

SASW04 
3rd July 2012 14:00 to 15:00 
On the inversion of computable functions
Ergodic shift-invariant measures inherit many effective properties of
the uniform measure: for instance, the frequency of $1$'s in a typical
sequence converges effectively, hence it converges at every Schnorr random
sequence; the convergence is robust to small violations of randomness
[1]; every Martin-Löf random sequence has a tail in every
effective closed set of positive measure [2]. These
properties are generally not satisfied by a non-ergodic measure, unless
its (unique) decomposition into a combination of ergodic measures is
effective. V'yugin [3] constructed a computable non-ergodic
measure whose decomposition is not effective. This measure is a countable
combination of ergodic measures. What happens for finite combinations? Is
there a finitely but non-effectively decomposable measure?
We prove that the answer is positive: there exist two non-computable ergodic measures $P$ and $Q$ such that $P+Q$ is computable. Moreover, the set of pairs $(P,Q)$ such that neither $P$ nor $Q$ is computable from $P+Q$ is large in the sense of Baire category. This result can be generalized into a theorem about the inversion of computable functions, which gives sufficient conditions on a one-to-one computable function $f$ that entail the existence of a non-computable $x$ such that $f(x)$ is computable. We also establish a stronger result ensuring the existence of a ``sufficiently generic'' $x$ such that $f(x)$ is computable, in the spirit of Ingrassia's notion of $p$-genericity [4]. [1] Vladimir V. V'yugin. "Non-robustness property of the individual ergodic theorem." Problems of Information Transmission, 37(2):27–39, 2001. [2] Laurent Bienvenu, Adam Day, Ilya Mezhirov, and Alexander Shen. "Ergodic-type characterizations of algorithmic randomness." In Computability in Europe (CiE 2010), volume 6158 of LNCS, pages 49–58. Springer, 2010. [3] Vladimir V. V'yugin. "Effective convergence in probability and an ergodic theorem for individual random sequences." SIAM Theory of Probability and Its Applications, 42(1):39–50, 1997. [4] M. A. Ingrassia. "P-genericity for Recursively Enumerable Sets." University of Illinois at Urbana-Champaign, 1981. 

SASW04 
3rd July 2012 15:00 to 15:30 
(Almost) Lowness for K and finite selfinformation
A real $X$ is called low for K if there is a constant $c$
such that using $X$ as an oracle does not decrease the Kolmogorov
complexity of any string by more than $c$. That is, the inequality
$K(\sigma) \leq K^{X}(\sigma) +c$ holds for all $\sigma \in
2^{<\omega}$.


SASW04 
3rd July 2012 17:00 to 17:30 
Prefix and plain Kolmogorov complexity characterizations of 2-randomness: simple proofs
Joseph Miller [1] and independently Andre Nies, Frank Stephan and Sebastian
Terwijn [2] gave a complexity characterization of $2$-random sequences
in terms of plain Kolmogorov complexity $C\,(\cdot)$: they are sequences
that have infinitely many initial segments with $O(1)$-maximal plain
complexity (among the strings of the same length).
Later Miller [3] (see also [4]) showed that prefix complexity $K\,(\cdot)$ can also be used in a similar way: a sequence is $2$-random if and only if it has infinitely many initial segments with $O(1)$-maximal prefix complexity (which is $n+K\,(n)$ for strings of length~$n$). The known proofs of these results are quite involved; we provide simple direct proofs for both of them. In [1] Miller also gave a quantitative version of the first result: the $0'$-randomness deficiency of a sequence $\omega$ equals $\liminf_n [n - C\,(\omega_1\dots\omega_n)] + O(1)$. (Our simplified proof also can be used to prove this quantitative version.) We show (and this seems to be a new result) that a similar quantitative result is true also for prefix complexity: the $0'$-randomness deficiency $d^{0'}(\omega)$ also equals $\liminf_n [n + K\,(n) - K\,(\omega_1\dots\omega_n)]+ O(1)$. This completes the picture: \begin{eqnarray*} d^{0'}(\omega) &=& \sup_n \, \left[ n - K\,^{0'}(\omega_1\dots\omega_n) \right] + O(1) \\ &=& \liminf_n \, \left[ n - C\,(\omega_1\dots\omega_n) \right] + O(1) \\ &=& \liminf_n \, \left[ n + K\,(n) - K\,(\omega_1\dots\omega_n) \right] + O(1) \,. \end{eqnarray*} [1] J.S. Miller. "Every 2-random real is Kolmogorov random." Journal of Symbolic Logic, 69(3):907–913, 2004. [2] A. Nies, F. Stephan, and S.A. Terwijn. "Randomness, relativization and Turing degrees." The Journal of Symbolic Logic, 70(2), 2005. [3] J.S. Miller. "The K-degrees, low for K degrees, and weakly low for K sets." Notre Dame Journal of Formal Logic, 50(4):381–391, 2009. [4] R.G. Downey and D.R. Hirschfeldt. "Algorithmic Randomness and Complexity." Theory and Applications of Computability. Springer, 2010. 

SASW04 
4th July 2012 09:00 to 10:00 
Limitations of Efficient Reducibility to the Kolmogorov Random Strings  
SASW04 
4th July 2012 10:00 to 10:30 
Language compression for sets in P/poly
If we consider a finite set $A$, it is desirable to represent every
$x$ in $A$ by another shorter string compressed($x$) such that
compressed($x$) describes unambiguously the initial $x$. Regarding
the compression rate, ideally, one would like to achieve the
information-theoretic bound |compressed($x$)| $\approx \log (|A|)$,
for all $x$ in $A$. This optimal rate is achievable for c.e. (and also
co-c.e.) sets $A$, because for such a set C($x$) $\leq \log (|A^{=n}|)
+ O(\log n)$ (C($x$) is the Kolmogorov complexity of string $x$ and $n$
is the length of $x$). In the time-bounded setting, we would like to have
a polynomial-time type of unambiguous description. The relevant concept
is the time-bounded distinguishing complexity, CD(), introduced by
Sipser. The $t$-time bounded distinguishing complexity of $x$, denoted
CD$^t(x)$, is the length of the shortest program $p$ that accepts $x$
and only $x$ within time $t(|x|)$. Recently we have shown that for
sets in P, NP, PSPACE, optimal compression can be achieved, using some
reasonable hardness assumptions. We show that this also holds for sets
in P/poly, i.e., for sets computable by polynomial-size
circuits. Furthermore, we sketch here a different proof method (even
though some key elements are common) suggested by Vinodchandran, which
needs a weaker hardness assumption.
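The information-theoretic bound mentioned above can be illustrated on a toy finite set: indexing the elements in sorted order describes each one with about log2(|A|) bits (an illustrative sketch with our names, not code from the talk):

```python
# Toy illustration (ours): fixed-width index codes of ceil(log2 |A|) bits
# give an unambiguous description of every element of a finite set A.

import math

def compress(A):
    """Map each x in A to its zero-padded binary index in sorted order."""
    order = sorted(A)
    width = max(1, math.ceil(math.log2(len(order))))
    return {x: format(i, f"0{width}b") for i, x in enumerate(order)}

def decompress(A, code):
    """Recover x from its index code, given (an enumeration of) the set A."""
    return sorted(A)[int(code, 2)]
```

For a four-element set every codeword has exactly two bits, matching log2(4).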


SASW04 
4th July 2012 11:00 to 12:00 
M Koucky 
The story of superconcentrators – the missing link
In the 1960s and 1970s, directed graphs with strong connectivity properties were linked to proving lower bounds on the complexity of solving various computational problems. Graphs with the strongest such property were named superconcentrators by Valiant (1975). An n-superconcentrator is a directed acyclic graph with n designated input nodes and n designated output nodes such that for any subset X of input nodes and any equal-sized subset Y of output nodes there are |X| vertex-disjoint paths connecting the sets. Contrary to previous conjectures, Valiant showed that there are n-superconcentrators with O(n) edges, thus killing the hope of using them to prove lower bounds on computation. His n-superconcentrators have depth O(log n). Despite this setback, superconcentrators found their way into lower bounds in the setting of bounded-depth circuits. They provide asymptotically optimal bounds for computing many functions including integer addition and, most recently, computing good error-correcting codes. The talk will provide an exposition of this fascinating area.
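The defining property can be checked by brute force on tiny examples. The sketch below (ours, exponential-time, and restricted to depth-one DAGs, where vertex-disjoint paths reduce to a bipartite matching) is purely illustrative:

```python
# Brute-force checker (ours) for the superconcentrator property on depth-one
# DAGs: inputs 0..n-1 connect directly to outputs "o0".."o{n-1}", so |X|
# vertex-disjoint paths from X to Y exist iff the bipartite subgraph induced
# by (X, Y) has a matching saturating X.

from itertools import combinations

def max_matching(adj, left, right):
    """Kuhn's augmenting-path algorithm for bipartite matching."""
    match = {}
    def try_kuhn(u, seen):
        for v in adj.get(u, ()):
            if v in right and v not in seen:
                seen.add(v)
                if v not in match or try_kuhn(match[v], seen):
                    match[v] = u
                    return True
        return False
    return sum(1 for u in left if try_kuhn(u, set()))

def is_superconcentrator_bipartite(n, adj):
    outs = [f"o{i}" for i in range(n)]
    for k in range(1, n + 1):
        for X in combinations(range(n), k):
            for Y in combinations(outs, k):
                Yset = set(Y)
                sub = {u: [v for v in adj.get(u, ()) if v in Yset] for u in X}
                if max_matching(sub, X, Yset) < k:
                    return False
    return True
```

The complete bipartite graph trivially satisfies the property; removing edges so that two inputs share a single reachable output breaks it.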


SASW04 
4th July 2012 12:00 to 12:30 
Autoreducibility for NEXP
Autoreducibility was first introduced by Trakhtenbrot in a
recursion-theoretic setting. A set $A$ is autoreducible if there is an
oracle Turing machine $M$ such that $A = L(M^A)$ and $M$ never queries
$x$ on input $x$. Ambos-Spies introduced the polynomial-time variant of
autoreducibility, where we require the oracle Turing machine to run in
polynomial time.
The question of whether complete sets for various classes are polynomial-time autoreducible has been studied extensively. In some cases, it turns out that resolving autoreducibility can result in interesting complexity class separations. One question that remains open is whether all Turing complete sets for NEXP are Turing autoreducible. An important separation may result when solving the autoreducibility for NEXP; if there is one Turing complete set of NEXP that is not Turing autoreducible, then EXP is different from NEXP. We do not know whether proving all Turing complete sets of NEXP are Turing autoreducible yields any separation results. Buhrman et al. showed that all ${\le_{\mathit{2\mbox{-}tt}}^{\mathit{p}}}$-complete sets for EXP are ${\le_{\mathit{2\mbox{-}tt}}^{\mathit{p}}}$-autoreducible. This proof technique exploits the fact that EXP is closed under exponential-time reductions that only ask one query that is smaller in length. Difficulties arise when we want to prove that the above result holds for NEXP, because we do not know whether this property still holds for NEXP. To resolve that difficulty, we use a nondeterministic technique that applies to NEXP and obtain the similar result for NEXP; that is, all ${\le_{\mathit{2\mbox{-}tt}}^{\mathit{p}}}$-complete sets for NEXP are ${\le_{\mathit{2\mbox{-}tt}}^{\mathit{p}}}$-autoreducible. Using similar techniques, we can also show that every disjunctive and conjunctive truth-table complete set for NEXP is disjunctive and conjunctive truth-table autoreducible respectively. In addition to those positive results, negative ones are also given. Using different notions of reductions, we can show that there is a complete set for NEXP that is not autoreducible. Finally, in the relativized world, there is a ${\le_{\mathit{2\mbox{-}T}}^{\mathit{p}}}$-complete set for NEXP that is not Turing autoreducible. 
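A toy instance of the definition (ours, not from the talk): the set of even numbers is autoreducible, since membership of x can be decided by querying the oracle only at x + 2, never at x itself:

```python
# Toy autoreduction (ours): decide "x is even" without ever querying x.

def autoreduction(x, oracle):
    def ask(q):
        assert q != x        # the defining restriction: never query x on input x
        return oracle(q)
    return ask(x + 2)        # x is even iff x + 2 is even
```

For example, `autoreduction(4, lambda n: n % 2 == 0)` returns `True` while querying the oracle only at 6.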

SASW04 
5th July 2012 09:00 to 10:00 
SJT-hardness and pseudojump inversion
Tracing was introduced to computability theory by Terwijn and Zambella, who used it to characterize the degrees which are low for Schnorr randomness. Zambella observed that tracing also has a relationship to K-triviality, which was strengthened by Nies and later by others.
A trace for a partial function f is a sequence of finite sets (T_z) with f(z) in T_z for all z in the domain of f. A trace (T_z) is c.e. if the T_z are uniformly c.e. sets.
An order function is a total, nondecreasing, unbounded positive function. If h is an order, a trace (T_z) is an h-trace if h(z) bounds the size of T_z.
A set is called jump-traceable (JT) if every partial function it computes has a c.e. h-trace for some computable order h. A set is called strongly jump-traceable (SJT) if every partial function it computes has a c.e. h-trace for every computable order h.
Figueira et al. constructed a noncomputable c.e. set which is SJT. This gives a natural pseudojump operator. Pseudojump inverting this SJT operator gives a set A such that the halting problem looks SJT relative to A. That is, for every partial function computable from the halting problem, and every computable order h, there is an h-trace which is uniformly c.e. relative to A. Such a set is called SJT-hard.
Downey and Greenberg showed that there is a noncomputable c.e. set E which is computable from every c.e. SJT-hard set. Thus the SJT operator cannot be pseudojump inverted outside of the cone above E. We strengthen this result, showing that E can be made superhigh.
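The trace definitions above can be made concrete in a small sketch (with an invented f and h for illustration; the interest in the definitions lies in tracing functions one can only compute relative to an oracle, which this toy does not capture):

```python
# A minimal sketch of the trace definitions: a trace for a partial
# function f is a sequence of finite sets (T_z) with f(z) in T_z for all
# z in the domain of f, and it is an h-trace when |T_z| <= h(z) for the
# order function h.  Both f and h here are invented examples.

def h(z):
    """An order function: total, nondecreasing, unbounded, and positive."""
    return z + 1

def f(z):
    """A sample computable function to be traced."""
    return (z * z) % 7

def build_trace(f, h, n):
    """First n components of an h-trace for f.  Since we know f, we may take
    the singleton T_z = {f(z)}, which satisfies |T_z| <= h(z) trivially."""
    return [{f(z)} for z in range(n)]

trace = build_trace(f, h, 20)
# Verify the two defining conditions: f(z) in T_z and |T_z| <= h(z).
print(all(f(z) in T and len(T) <= h(z) for z, T in enumerate(trace)))  # True
```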


SASW04 
5th July 2012 10:00 to 10:30 
Translating the Cantor set by a random
I will discuss the constructive dimension of points in random translates
of the Cantor set. The Cantor set ``cancels randomness'' in the sense
that some of its members, when added to Martin-Löf random reals,
identify a point with lower constructive dimension than the random itself.
In particular, we find the Hausdorff dimension of the set of points in
a Cantor set translate with a given constructive dimension.
More specifically, let $\mathcal{C}$ denote the standard middle third Cantor set, and for each real $\alpha$ let $\mathcal{E}_{\alpha}$ consist of all real numbers with constructive dimension $\alpha$. Our result is the following.
If $1 - \log 2/\log 3 \leq \alpha \leq 1$ and $r$ is a Martin-Löf random real, then the Hausdorff dimension of the set $(\mathcal{C}+r) \cap \mathcal{E}_{\alpha}$ is $\alpha - (1 - \log 2/\log 3)$. From this theorem we obtain a simple relation between the effective and classical Hausdorff dimensions of this set; the difference is exactly $1$ minus the dimension of the Cantor set. We conclude that many points in the Cantor set additively cancel randomness. On the surface, the theorem above describes a connection between algorithmic randomness and classical fractal geometry. Less obvious is its relationship to additive number theory. In 1954, G. G. Lorentz proved the following statement.
There exists a constant $c$ such that for any integer $k$, if $A\subseteq [0, k)$ is a set of integers with $\left|A\right| \geq \ell \geq 2$, then there exists a set of integers $B \subseteq (-k,k)$ such that $A + B \supseteq [0, k)$ with $\left|B\right| \leq ck\frac{\log \ell}{\ell}$. Given a Martin-Löf random real $r$, I will show how Lorentz's Lemma can be used to identify a point $x\in\mathcal{C}$ such that the constructive dimension of $x+r$ is close to $1 - \log 2 / \log 3$, which is as small as it can possibly be. This talk is based on joint work with Randy Dougherty, Jack Lutz, and Dan Mauldin. 
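The constant $\log 2/\log 3$ throughout is the Hausdorff dimension of $\mathcal{C}$; for completeness, the standard self-similarity computation behind it is:

```latex
% C splits into N = 2 copies of itself scaled by the ratio r = 1/3, so its
% similarity (= Hausdorff) dimension d solves N r^d = 1:
\[
  2\cdot\left(\tfrac{1}{3}\right)^{d} = 1
  \quad\Longrightarrow\quad
  d = \frac{\log 2}{\log 3} \approx 0.6309,
\]
% hence the gap 1 - \log 2/\log 3 \approx 0.3691 appearing in the theorem.
```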

SASW04 
5th July 2012 11:00 to 12:00 
Exact pairs for the ideal of the $K$-trivial sequences in the Turing degrees
The $K$-trivial sets form an ideal in the Turing degrees, which is
generated by its computably enumerable (c.e.) members and has an exact
pair below the degree of the halting problem. The question of whether
it has an exact pair in the c.e. degrees was first raised in a published
list of questions in the Bulletin of Symbolic Logic in 2006 by Miller and
Nies, and later in Nies' book on computability and randomness. Moreover it
has been featured at several conferences on algorithmic randomness since 2005.
We give a negative answer to this question. In fact, we show the following stronger statement in the c.e. degrees. There exists a $K$-trivial degree $\mathbf{d}$ such that for all degrees $\mathbf{a}, \mathbf{b}$ which are not $K$-trivial and $\mathbf{a}>\mathbf{d}, \mathbf{b}>\mathbf{d}$ there exists a degree $\mathbf{v}$ which is not $K$-trivial and $\mathbf{a}>\mathbf{v}, \mathbf{b}>\mathbf{v}$. This work sheds light on the question of the definability of the $K$-trivial degrees in the c.e. degrees. 

SASW04 
5th July 2012 12:00 to 12:30 
P A Heiber 
Normality and Differentiability
By transferring to the world of functions computable by finite automata the classical theorem of real analysis establishing that every nondecreasing real valued function is almost everywhere differentiable, we obtain a characterization of the property of Borel normality. We consider functions mapping infinite sequences to infinite sequences and a notion of differentiability that, on the class of nondecreasing real valued functions, coincides with standard differentiability. We prove that the following are equivalent, for a real x in [0,1]:
(1) x is normal to base b.
(2) Every nondecreasing function computable by a finite automaton mapping infinite sequences to infinite sequences is differentiable at the expansion of x in base b.
(3) Every nondecreasing function computable by a finite automaton in base b mapping real numbers to real numbers is differentiable at x.
Joint work with Verónica Becher, Universidad de Buenos Aires. 
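Normality to base b, as used above, means every length-k digit block occurs in the base-b expansion with limiting frequency $b^{-k}$. A small empirical sketch (suggestive only: no finite test can establish normality, and the example numbers are our own):

```python
# Empirical digit frequencies in base-b expansions.  Borel normality to
# base b requires every length-k block to occur with limiting frequency
# b**-k; here we only look at single digits on short prefixes.

from collections import Counter
from fractions import Fraction

def base_digits(x, b, n):
    """First n digits of the base-b expansion of a rational x in [0,1)."""
    digits = []
    for _ in range(n):
        x *= b
        d = int(x)
        digits.append(d)
        x -= d
    return digits

# A rational has an eventually periodic expansion, so it is not normal:
# 1/7 = 0.142857142857... uses only six of the ten digits.
rational = base_digits(Fraction(1, 7), 10, 600)
print(sorted(Counter(rational)))  # [1, 2, 4, 5, 7, 8]

# Champernowne's number 0.123456789101112... is a classical example of a
# number normal to base 10; its single-digit frequencies already hover
# near 1/10 on a short prefix.
champernowne = [int(c) for i in range(1, 1000) for c in str(i)]
counts = Counter(champernowne)
print(all(abs(counts[d] / len(champernowne) - 0.1) < 0.05 for d in range(10)))  # True
```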

SASW04 
5th July 2012 14:00 to 14:30 
Kolmogorov complexity and Fourier aspects of Brownian motion
It is well-known that the notion of randomness, suitably refined, goes a long way in dealing with the tension between the ``incompatibility of shortest descriptions and of effecting the most economical algorithmic processing" Manin (2006). In this work, we continue to explore this interplay between short descriptions and randomness in the context of Brownian motion and its associated geometry. In this way one sees how random phenomena associated with the geometry of Brownian motion are implicitly enfolded in each real number which is complex in the sense of Kolmogorov. These random phenomena range over fractal geometry, Fourier analysis and non-classical noises in quantum physics. In this talk we shall discuss countable dense random sets as they appear in the theory of Brownian motion, in the context of algorithmic randomness. We shall also discuss applications to Fourier analysis. In particular, we also discuss the images of certain $\Pi_2^0$ perfect sets of Hausdorff dimension zero under a complex oscillation (which is also known as an algorithmically random Brownian motion). This opens the way to relating certain non-classical noises to Kolmogorov complexity. For example, the present work enables one to represent Warren's splitting noise directly in terms of infinite binary strings which are Kolmogorov-Chaitin-Martin-Löf random.


SASW04 
5th July 2012 14:30 to 15:00 
Ergodic theory and strong randomness notions
There has been a great deal of interest recently in the connection between
algorithmic randomness and ergodic theory, which naturally leads to the
question of how much one can say if the transformations in question need
not be ergodic. We have essentially reversed a result of V'yugin and shown
that if an element of the Cantor space is not MartinLöf random, then
there is a computable function and a computable transformation for which
this element is not typical with respect to the ergodic theorem. More
recently, we have shown that every weakly 2-random element of the Cantor
space is typical with respect to the ergodic theorem for every lower
semicomputable function and computable transformation. I will explain
the proof of the latter result and discuss the technical difficulties
present in producing a full characterization.


SASW04 
5th July 2012 15:00 to 15:30 
Z Reznikova 
Integration of ideas and methods of Kolmogorov Complexity and classical mathematical statistics
A new approach is suggested which allows one to combine the advantages of
methods based on Kolmogorov complexity with classical methods of
testing of statistical hypotheses. As distinct from other approaches
to analysis of different sequences by means of Kolmogorov complexity,
we stay within the framework of mathematical statistics. As examples,
we consider behavioural sequences of animals (ethological ``texts'')
testing the hypotheses whether the complexity of hunting behaviour in
ants and rodents depends on the age of an individual.
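Kolmogorov complexity itself is uncomputable, so empirical work in this vein typically substitutes the length of a compressed encoding as a computable upper bound. The following is a generic sketch of that idea, not the authors' specific method; the example sequences are invented:

```python
# Compression length as a computable proxy for Kolmogorov complexity.
# Two hypothetical behavioural 'texts' over the alphabet {A, B} are
# compared: a regular one admits a much shorter description.

import random
import zlib

def compressed_length(symbols):
    """Length in bytes of a zlib-compressed encoding of a symbol sequence."""
    return len(zlib.compress("".join(symbols).encode(), level=9))

repetitive = list("ABAB" * 50)  # a highly stereotyped pattern, 200 symbols
random.seed(0)
varied = [random.choice("AB") for _ in range(200)]  # an irregular pattern

print(compressed_length(repetitive) < compressed_length(varied))  # True
```

Within a classical statistical framework, one could compute such lengths per individual and compare age groups with a standard rank test, which is in the spirit of (though not identical to) the approach described.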


SASW04 
5th July 2012 17:00 to 17:30 
D Ryabko 
Limit capacity of nonstochastic steganographic systems and Hausdorff dimension
It was shown recently that the limit capacity of perfect steganography
systems for i.i.d. and Markov sources equals the Shannon entropy of
the ``cover'' process. Here we address the problem of limit capacity
of general perfect steganographic systems. We show that this value
asymptotically equals the Hausdorff dimension of the set of possible
cover messages.


SASW04 
6th July 2012 10:00 to 10:30 
Cryptography and Algorithmic Randomness
In modern cryptography, the random oracle model is widely used as an imaginary framework in which the security of a cryptographic scheme is discussed. In the random oracle model, the cryptographic hash function used in a cryptographic scheme is formulated as a random variable uniformly distributed over all possibilities for the function, called the random oracle, and the legitimate users and the adversary against the scheme are modeled so as to get the values of the hash function not by evaluating it on their own but by querying the random oracle. Since the random oracle is an imaginary object, even if the security of a cryptographic scheme is proved in the random oracle model, the random oracle has to be instantiated using a concrete cryptographic hash function such as the SHA hash functions if we want to use the scheme in the real world. However, it is not clear how much the instantiation can maintain the security originally proved in the random oracle model, nor is it clear whether the random oracle can be instantiated somehow while keeping the original security. In the present talk we investigate this problem using concepts and methods of algorithmic randomness. Our results use the general form of definitions of security notions for cryptographic schemes, and depend neither on specific schemes nor on specific security notions.
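The contrast between the imaginary oracle and its real-world instantiation can be sketched as follows (an illustrative toy of our own, not part of the talk):

```python
# A random oracle answers each new query with a fresh uniform value and
# remembers past answers; instantiation replaces it with a fixed, publicly
# computable hash function such as SHA-256.

import hashlib
import os

class RandomOracle:
    """A lazily sampled uniform function from bytes to 256-bit strings."""
    def __init__(self):
        self.table = {}

    def query(self, message: bytes) -> bytes:
        if message not in self.table:
            self.table[message] = os.urandom(32)  # fresh uniform answer
        return self.table[message]

def sha256_instantiation(message: bytes) -> bytes:
    """The real-world replacement: a concrete, publicly known function."""
    return hashlib.sha256(message).digest()

ro = RandomOracle()
# Like a fixed function, the oracle is consistent across repeated queries...
print(ro.query(b"m") == ro.query(b"m"))  # True
# ...but unlike SHA-256, its answers cannot be computed without querying it.
print(len(sha256_instantiation(b"m")) == 32)  # True
```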


SASW04 
6th July 2012 11:00 to 12:00 
A Lewis 
The typical Turing degree
Since Turing degrees are tail sets, it follows from Kolmogorov's 0-1 law
that for any property which may or may not be satisfied by any given
Turing degree, the satisfying class will either be of Lebesgue measure
0 or 1, so long as it is measurable. So either the typical
degree satisfies the property, or else the typical degree satisfies its
negation. Further, there is then some level of randomness sufficient
to ensure typicality in this regard. I shall describe a number of
results in a largely new programme of research which aims to establish the
(order theoretically) definable properties of the typical Turing degree,
and the level of randomness required in order to guarantee typicality.
A similar analysis can be made in terms of Baire category, where a standard form of genericity now plays the role that randomness plays in the context of measure. This case has been fairly extensively examined in the previous literature. I shall analyse how our new results for the measure theoretic case contrast with existing results for Baire category, and also provide some new results for the category theoretic analysis. This is joint work with George Barmpalias and Adam Day. 

SASW04 
6th July 2012 12:00 to 12:30 
Algorithmic randomness and stochastic selection function
For $x=x_1x_2\cdots,y=y_1y_2\cdots\in\{0,1\}^\infty$, let
$x^n:=x_1\cdots x_n$ and $x/y:=x_{\tau(1)}x_{\tau(2)}\cdots$ where
$\{i\mid y_i=1\}=\{\tau(1)<\tau(2)<\cdots\}$.
The following two statements are equivalent.
The author does not know whether we can drop the assumption that $x$ is random w.r.t. some computable probability in (ii), i.e., whether we can replace (ii) with $x/y\in R^u$ for $y\in Y^x$ where $Y^x$ consists of nontrivial selection functions and depends on $x$. We also have a similar algorithmic analogy for Steinhaus' theorem [1]. Let $w$ be a computable probability such that
Then the following two statements are equivalent.
For example, $w:=\int P_\rho d\rho$, where $P_\rho$ is a probability derived from irrational rotation with parameter $\rho$, satisfies the condition of Prop. 2, see [2]. There are similar algorithmic analogies for Kamae's theorem [4], see [3].
[1] H. Steinhaus. "Les probabilités dénombrables et leur rapport à la théorie de la mesure." Fund. Math., 4:286–310, 1922.
[2] H. Takahashi and K. Aihara. "Algorithmic analysis of irrational rotations in a single neuron model." J. Complexity, 19:132–152, 2003.
[3] H. Takahashi. "Algorithmic analogies to Kamae-Weiss theorem on normal numbers." In Solomonoff 85th memorial conference. To appear in LNAI.
[4] T. Kamae. "Subsequences of normal numbers." Israel J. Math., 16:121–149, 1973. 
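The selection operation $x/y$ defined at the start of the abstract is easy to state on finite prefixes:

```python
# The selection operation x/y on finite prefixes: keep the bits of x at
# exactly those positions i where y_i = 1.

def select(x, y):
    """Return the prefix of x/y determined by 0-1 strings x, y of equal length."""
    return "".join(xi for xi, yi in zip(x, y) if yi == "1")

print(select("110100", "101010"))  # positions 1, 3, 5 of x -> "100"
```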

SASW04 
6th July 2012 14:00 to 15:00 
Cupping with random sets
A set $X$ is ML-cuppable if there exists an incomplete Martin-Löf
random $R$ that joins $X$ to zero jump. It is weakly ML-cuppable if
there exists an incomplete Martin-Löf random $R$ that joins $X$
above zero jump. We prove that a set is K-trivial if and only if it is
not weakly ML-cuppable. Further, we show that a set below zero jump is
K-trivial if and only if it is not ML-cuppable. These results settle a
question of Kučera, who introduced both cuppability notions. This
is joint work with Joseph S. Miller.


SASW04 
6th July 2012 16:00 to 17:00 
Resolute sets and initial segment complexity
Notions of triviality have been remarkably productive in algorithmic
randomness, with $K$-triviality being the most notable. Of course,
ever since the original characterization of Martin-Löf randomness
by initial segment complexity, there has been a longstanding interplay
between initial segment complexity and calibrations of randomness,
as witnessed by concepts such as autocomplexity, and the like.
In this talk I wish to discuss recent work with George Barmpalias on a
triviality notion we call resoluteness. Resoluteness is defined in
terms of computable shifts but is intimately related to a notion we call
weak resoluteness, where $A$ is weakly resolute iff for all computable
orders $h$, $K(A\upharpoonright n) \ge^+ K(A\upharpoonright h(n))$ for all $n$. It
is not difficult to see that $K$-trivials have this property but it
turns out that there are uncountably many degrees which are completely
$K$-resolute, and there are c.e. degrees also with this property.
These degrees seem related to the Lathrop-Lutz ultracompressible degrees.
Our investigations are only just beginning and I will report on our progress.
Joint work with George Barmpalias.
