
Bachelor and Master Research Projects at SWS

The topic driving the research carried out in the Department of Software Science is models and modeling. Models provide abstractions of systems, artificial or natural, that allow reasoning about properties of these systems, ignoring extraneous details while focusing on relevant ones. Explicit models have always played a key role in science and engineering. There is now a clear trend in computer and information science towards the systematic use of models as the primary artifacts throughout the engineering lifecycle of computer-based systems. Requirements, behavior, functionality, construction and testing strategies of computer-based systems are all described in terms of models. Models are not only used to reason about a system, but also to allow all stakeholders to participate in the development process and to communicate with each other, to generate implementations, and to facilitate reuse.

Note: there are opportunities for doing internships at the prestigious Max Planck Institutes. For Software Science, the MPI for Software Systems is of particular interest. Contact Frits Vaandrager in case you are interested.

Bachelor thesis projects are written within the context of the corresponding course. General info about the process of doing your master thesis, including the administrative procedure, is available at the Master Thesis Lab webpages. There are many opportunities for bachelor and master research projects within the SWS department. They can be grouped into the following five themes:

  1. Embedded Systems
  2. Model Learning and Model-Based Testing
  3. Task Oriented and Functional Programming
  4. Foundations
  5. Computing Education

Some of the listed topics (especially internships) are only suitable as an MSc project. For most topics, however, both a bachelor and a master thesis project can be defined.

Theme I: Embedded Systems

Embedded systems constitute a natural application area for the technologies developed within our group. We collaborate with several major companies in this area, such as Philips Healthcare, ASML, NXP and Oce. There are many opportunities for projects.

a) A virtual environment for the development of embedded software

To validate (models of) embedded software it is useful to have a virtual simulation of the hardware that is controlled by the software. The aim of this project is to investigate the construction of such a simulation environment, e.g., using serious gaming techniques. An important aspect is the trade-off between the amount of detail (the realism of the model), the speed of the simulation, and the user needs. As an application of the approach, the goal is a simulation environment for Lego Mindstorms EV3 robots (contact Jozef Hooman).

b) Developing a scheduling analysis tool

The aim is to develop tool support for the analysis of scheduling problems. This includes, for instance, support for the simulation of various scheduling strategies (such as rate monotonic, earliest deadline first, etc.) and a graphical representation of schedules using Gantt charts. The aim is to build upon existing tools for simulation and visualization. The main challenge is to develop a tool-independent front end where a user can easily express the problem and which allows a translation to various simulation and analysis tools (contact Jozef Hooman).
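
To give a concrete feel for one of the strategies mentioned, here is a minimal, purely illustrative sketch in Haskell (all names are invented; this is not part of any existing tool) that simulates rate-monotonic scheduling of a set of periodic tasks and prints a textual Gantt chart, one character per time slot:

    -- Rate-monotonic scheduling: the pending job whose task has the smallest
    -- period runs first; '.' marks an idle slot.
    import Data.List (sortOn)

    data Task = Task { name :: Char, period :: Int, wcet :: Int }

    data Job = Job { owner :: Task, remaining :: Int }

    -- Release a fresh job of every task whose period divides the current time.
    release :: [Task] -> Int -> [Job]
    release ts t = [ Job tk (wcet tk) | tk <- ts, t `mod` period tk == 0 ]

    -- One slot of fixed-priority preemptive scheduling.
    step :: [Job] -> (Char, [Job])
    step [] = ('.', [])
    step js = (name (owner j), js'')
      where
        (j : rest) = sortOn (period . owner) js
        js'' | remaining j > 1 = j { remaining = remaining j - 1 } : rest
             | otherwise       = rest

    -- Simulate a given number of time slots.
    simulate :: [Task] -> Int -> String
    simulate ts horizon = go 0 []
      where
        go t ready
          | t == horizon = []
          | otherwise    = c : go (t + 1) ready'
          where (c, ready') = step (release ts t ++ ready)

    main :: IO ()
    main = putStrLn (simulate [Task 'A' 4 1, Task 'B' 5 2, Task 'C' 10 3] 20)

A real tool-independent front end would of course need a richer task model (deadlines, jitter, dependencies) and translations to existing simulation and analysis tools rather than a built-in simulator.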

c) Internship at ASML

Design an efficient, scalable, and robust calculation service for critical dimension metrology (contact Frits Vaandrager).

d) Model checking at NXP

One year ago, some students of the Analysis of Embedded Systems course did a successful project in which they applied the NuSMV model checker to analyze a Die-Bonder Strip Glue machine at NXP/ITEC in Nijmegen. I am looking for students interested in an MSc project at NXP, building further on this work (contact Frits Vaandrager).

e) Internship at Embedded Systems Innovation in Eindhoven

Former ICIS student Martijn Hendriks is looking for students that are interested in an internship at ESI/TNO in Eindhoven. ESI/TNO uses system-level performance models to predict the performance of future implementations of the datapath of high-end printers and copiers from Oce. This model-based way of working can lead to shorter development times and improved performance of products. An important step in this process is to validate and calibrate the models that are used. The goal of the assignment is to investigate distance metrics for (sets of) execution traces. Such metrics can contribute to the automated validation and calibration of performance models. The developed metrics are expected to be implemented in a prototype tool. This assignment has a challenging theoretical side, but its results can directly be applied and tested in an industrial setting. For more info see here.

f) Domain-specific language for indoor lighting systems at TNO-ESI and Philips Lighting

The aim of this assignment is to develop a Domain Specific Language for the configuration of indoor lighting systems. It should be possible to analyze configurations, detect inconsistencies, and generate Uppaal models to verify robustness. For more information, see the TNO graduation assignment (contact Jozef Hooman).


Theme II: Model Learning and Model-Based Testing

Within SWS we work on the design of algorithms that allow computers to learn complex state diagrams by providing inputs and observing outputs. This is an exciting new area in which we collaborate with the Digital Security group to learn models of security protocols, and also with several companies (e.g. ASML, Philips Healthcare, Oce, FEI and Axini). We refer to the recent review article by Frits Vaandrager in Communications of the ACM for a general introduction. In model-based testing, a model is the starting point for testing. This model expresses precisely and completely what a system under test should and should not do, and consequently it is a good basis for systematically generating test cases. Automata learning and model-based testing are complementary techniques. In particular, model-based testing can be used to find counterexamples for candidate models produced by automata learning techniques. There are several opportunities for theoretical, tool/programming-oriented, and application-oriented projects related to model learning and model-based testing:

a) Theory

Did you like the Languages and Automata course and are you interested in theory (e.g., because you are doing the MFoCS specialization)? Well, we can define dozens of projects on automata learning that build further on the classical theory of languages and automata. How can we make automata learning algorithms more efficient (for instance by integrating learning and testing of hypotheses)? How can we extend Angluin's original L* algorithm for learning finite automata to richer classes of automata that can model data or time? What is the link between nominal automata and register automata? How can we learn complex dynamic behavior incrementally? (contact Frits Vaandrager).
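
For readers new to the area, the sketch below (plain Haskell, with hypothetical names; it is not an existing implementation) shows the "minimally adequate teacher" interface on which Angluin-style learners such as L* are built: a membership query, and a naive equivalence query that simply compares two automata on all short words. In realistic settings the equivalence query is approximated by model-based testing, which is exactly where the integration of learning and testing comes in.

    import Data.List (find)

    type State = Int
    data DFA = DFA
      { start  :: State
      , accept :: State -> Bool
      , delta  :: State -> Char -> State
      }

    -- Membership query: does the system under learning accept this word?
    member :: DFA -> String -> Bool
    member m = accept m . foldl (delta m) (start m)

    -- Naive equivalence query: compare target and hypothesis on all words up
    -- to a given length, returning a counterexample if they disagree.
    equiv :: DFA -> DFA -> Int -> Maybe String
    equiv target hyp maxLen =
      find (\w -> member target w /= member hyp w) (wordsUpTo maxLen)
      where
        wordsUpTo n = concatMap ws [0 .. n]
        ws 0 = [""]
        ws k = [ c : w | c <- "ab", w <- ws (k - 1) ]

    -- Example target: words over {a,b} with an even number of 'a's.
    evenAs :: DFA
    evenAs = DFA 0 (== 0) step
      where
        step q 'a' = 1 - q
        step q _   = q

    main :: IO ()
    main = print (equiv evenAs (DFA 0 (== 0) (\q _ -> q)) 3)  -- Just "a"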

b) Learning Tools

Do you like programming? Within our group we are working on the Tomte learning tool. There are several possibilities for tool-oriented projects, especially to link Tomte to other tools, such as model-based testing tools (in order to establish the correctness of the hypothesis models learned by Tomte), invariant detectors such as Daikon (to learn models with data), model checking tools (to analyse compositions of learned component models), and other learning tools (in particular passive learning tools to construct models from logs) (contact Frits Vaandrager).

c) Combining Black Box and White Box Learning

Thus far, in Nijmegen, we have concentrated mostly on black box automata learning techniques. Application of such techniques makes sense if we have either no access to the code of the system under learning, or when this code is too complicated to analyze/understand. Recently, interesting progress has been made in the area of white-box learning, e.g. by Botinčan & Babić. Here the goal is to derive abstract models of external behavior directly from the code. The goal of this project would be to investigate, based on a well-chosen case study, scenarios in which black box and white box learning are combined. We have some very nice ideas on how static analysis and techniques from compiler construction can be used to speed up model learning! (contact David Jansen or Frits Vaandrager)

d) Learning Models of Network Protocols

Did you like the Computer Networks (GDN) course and do you find it interesting to understand the details of network protocols? There are many possibilities for projects in which the goal is to learn models of protocol components. We are particularly interested in learning models of security-related protocols (like the biometric passport, the EMV protocol and the ABN AMRO edentifier). Here automata learning may help us to find specific sequences of input events that exhibit security vulnerabilities. Last year two students learned models of fragments of TCP and SSH as part of their master's thesis project. There is definitely room for follow-up projects on TCP and SSH, but there are numerous other protocols of interest (Skype, SIP, EMV, ...) (contact Frits Vaandrager or Erik Poll).

e) Generalizing Algorithms for FSM Conformance Testing to an Ioco Setting

Theorists have developed a powerful set of algorithms for conformance testing of deterministic Finite State Machine models, see e.g. Lee & Yannakakis. A problem is that in practice systems are often nondeterministic and do not exhibit the strict alternation of inputs and outputs from the FSM model: sometimes an input is not followed by any output at all, sometimes an input is followed by a series of outputs. Jan Tretmans has developed a popular theory of testing for this general class of systems, but there are still very few algorithms for efficient conformance testing in this general setting. The goal of this project would be to adapt existing algorithms for FSM testing to the ioco setting of Tretmans (contact Frits Vaandrager or Jan Tretmans).

f) Model-based concolic testing

Concolic testing is a combination of symbolic and concrete testing used for white-box testing, which, among others, is successfully applied by Microsoft. Model-based testing (MBT) is a promising approach for black-box testing. The goal is to combine these two approaches by applying the concepts of concolic testing to (ioco-based) MBT, in order to improve test selection for MBT (contact Jan Tretmans).

g) Learning algorithms for non-deterministic systems

Current tools for automata learning are based on deterministic Finite State Machines (FSM; Mealy Machines). Recently, we have published methods for learning non-deterministic labelled transitions systems, combining the FSM learning theory and the IOCO model-based testing theory. The next step is to turn these learning methods into efficient algorithms, to implement them, and to apply them to case studies with non-deterministic behaviour (contact Jan Tretmans).

h) Internship at ASML

Towards a Risk-Assessment Based Approach for Refactoring of Legacy Components (contact Frits Vaandrager)

i) Internship at Oce

Model learning for performance model calibration (contact Frits Vaandrager)

j) Internship at Oce

Validation of inter process communication (contact Frits Vaandrager)

k) Internship at Oce

Recently, we succeeded in applying automata learning to learn a Finite State Machine model of a controller that is being used in printers/copiers from Oce. The goal of this project would be to further scale this technique so that it becomes practically useful, for instance to create new implementations of legacy software. The main new idea is to use logs of an existing implementation as a starting point for active learning algorithms (contact Frits Vaandrager).

l) Internship at Axini

Axini is a spin-off of the University of Twente, based on research by Jan Tretmans and his students on model-based testing. The company specializes in test automation and aims at the application of formal methods in practice. There are several MSc thesis projects available at Axini, see here for more details. In particular subprojects 1, 2 and 7 are interesting (contact Jan Tretmans).

m) Internship at FEI Company

FEI Company develops high-end microscopes. To build control software for electron microscopes, FEI Company is continuously looking for ways to optimize the software engineering process. One of the techniques that could improve the quality and time-to-market is making use of model-based testing. The goal of this internship is to get sufficient knowledge to decide whether FEI should introduce model-based testing in the software engineering process. This includes an overview of available techniques, the selection of a promising candidate, and the application to a case study of FEI (contact Jozef Hooman).

n) Model-based testing with user profiles (ASML)

ASML systems are huge. Scaling MBT to test such large systems is a challenge, in particular the selection of appropriate test cases for testing such systems. One approach is to use operational profiles as the basis for testing, i.e., using users' scenarios as the basis for test selection. The goal is to combine user profile-based testing with ioco model-based testing (contact Jan Tretmans).

o) Model-based testing with domain-specific languages (ASML, Oce)

Many companies like to write models in languages that are specific for their domain, i.e., domain-specific languages (DSLs). Such models shall then also be used for model-based testing of products developed from these models. The goal is to investigate how DSLs can be used for model-based testing, what requirements shall be imposed on DSLs to facilitate MBT, and to apply these ideas to a DSL from Oce or ASML and ioco-based MBT (contact Jan Tretmans).

p) Minimal counterexamples in Model-based testing

We defined Domain Specific Languages to express logic properties of pure functions and the allowed behaviour of state-based systems. Based on these specifications the automatic test system generates test cases, executes the associated tests, and gives a verdict about the system under test. Current research focuses on finding smaller test cases showing non-conformance, and challenging applications of model-based testing (contact Pieter Koopman).
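
As a very rough illustration of what "finding smaller test cases showing non-conformance" amounts to, here is a hypothetical Haskell sketch (invented names; this is not Gast's actual API) of greedy shrinking of a failing input trace, in the style of QuickCheck/Gast shrinking:

    -- Candidate smaller traces: drop one input at a time.
    shrinkTrace :: [a] -> [[a]]
    shrinkTrace xs = [ take i xs ++ drop (i + 1) xs | i <- [0 .. length xs - 1] ]

    -- Keep shrinking as long as some smaller trace still fails the property.
    minimise :: ([a] -> Bool) -> [a] -> [a]
    minimise failing trace =
      case filter failing (shrinkTrace trace) of
        []      -> trace                  -- locally minimal counterexample
        (t : _) -> minimise failing t

    -- Toy property violation: the trace contains three consecutive equal inputs.
    violates :: [Int] -> Bool
    violates (x : y : z : rest) = (x == y && y == z) || violates (y : z : rest)
    violates _                  = False

    main :: IO ()
    main = print (minimise violates [1, 2, 2, 3, 3, 3, 4, 5])  -- [3,3,3]

For state-based systems the interesting part is that a shrunk trace must remain a valid input sequence of the model, which is what makes test-case minimisation for model-based testing more subtle than this sketch suggests.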

q) Internships at other companies

In the context of our research on automata learning we are collaborating with several companies that are interested in the potential of this new technique, for instance Collis and Philips Healthcare. Please contact Frits Vaandrager in case you are interested.


Theme III: Task Oriented and Functional Programming

Task-Oriented Programming (TOP) is a new programming paradigm for developing interactive multi-user systems, developed by our department. This programming paradigm uses "Tasks" as its central concept.

The idea is that a computer program supports a human in accomplishing a certain task, or autonomously performs some task itself. If we can precisely define the task that needs to be accomplished, or can decompose a complex task into smaller, simpler tasks, we should have enough information to generate an executable system out of the description which supports the work thus described. This means that all technical infrastructure of an interactive system (handling events, storing data, encoding/decoding, etc.) can be factored out and solved generically.

In other words: a programmer can now focus on the main issue, namely which tasks the system needs to support, and no longer needs to worry about the technical details to make it work.

A TOP framework provides the following components:

 - A set of predefined generic "task" definitions;
 - A set of task combinators;
 - A set of predefined generic datasource definitions;
 - A set of datasource combinators;
 - A task execution engine

The iTask system we are developing at SWS provides all of these components for the Clean programming language. TOP is offered in Clean as an Embedded Domain Specific Language (EDSL).
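
To give an impression of what these ingredients look like, here is a deliberately crude, hypothetical sketch in Haskell (the names and types are invented and far simpler than the real iTask combinators): two basic tasks, a sequential step combinator, and a parallel combinator, modelled directly on top of IO.

    newtype Task a = Task { runTask :: IO a }

    -- Predefined generic tasks: show a value to a user, ask a user for input.
    viewInformation :: Show a => String -> a -> Task a
    viewInformation prompt x = Task (putStrLn (prompt ++ ": " ++ show x) >> pure x)

    enterInformation :: Read a => String -> Task a
    enterInformation prompt = Task (putStrLn prompt >> fmap read getLine)

    -- Sequential step combinator: continue with the result of the first task.
    (>>>=) :: Task a -> (a -> Task b) -> Task b
    Task t >>>= f = Task (t >>= runTask . f)

    -- Parallel combinator: perform two tasks and combine their results.
    (-&&-) :: Task a -> Task b -> Task (a, b)
    Task t1 -&&- Task t2 = Task ((,) <$> t1 <*> t2)

    -- A tiny workflow: two participants each enter a bid, the highest is shown.
    example :: Task Int
    example =
      (enterInformation "Bid of participant 1" -&&- enterInformation "Bid of participant 2")
        >>>= \(a, b) -> viewInformation "Highest bid" (max a b)

    main :: IO ()
    main = runTask example >> return ()

In the real iTask system the execution engine additionally takes care of generating user interfaces, persisting shared data sources, and distributing tasks over multiple users, which is exactly the infrastructure that is factored out for the programmer.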

a) Applying TOP in industry (master)

The iTask system is used by industry, mainly for rapid prototyping. We cooperate with professional organizations where many different parties work together intensively, accomplishing complex tasks that are critical and need to be performed in a timely and correct manner. These partners currently are:

 - the Netherlands Coast Guard
   they use TOP to prototype their Search and Rescue operations; 
 - TNO
   they use TOP to investigate more efficient ways of working on Navy vessels;
 - the NLDA (Defense Academy)
   they use TOP to
   a) improve Command and Control systems, and
   b) investigate a general architecture to support mission-critical operations.

We are looking for students who want to investigate the applicability of TOP in one of those industrial environments.

b) TOP Tools (research b, master)

In TOP one can describe arbitrarily complex collaborations. But does the resulting system behave as intended? How can we specify that? How can we measure that? How can we simulate the behaviour and test what is going on?

c) TOP Parallel Tasks (research b, master)

Our processors do not seem to become much faster anymore. Instead, the number of processors increases, leading to multi-core systems. So, why not go for parallel evaluation of TOP tasks as well? This seems obvious, but it is not trivial at all, because tasks are reactive and share information with other tasks. The semantics do not yet take parallel evaluation of tasks into account and assume that task rewriting is an indivisible action. We are looking for students who want to study how tasks can be evaluated in parallel, from a theoretical and/or from a practical point of view.

d) TOP Games (research b, master)

Playing games is a good example of a collaboration on the net. We have developed a small number of turn-based, n-person games in TOP (tic-tac-toe, trax, Ligretto) to investigate the suitability of the novel paradigm. There are of course many more kinds of games. In this project we investigate these classes of games and study how they can be structured best according to the TOP paradigm.

e) TOP Drones (research b, new devices lab, master)

Drones and other unmanned autonomous systems, e.g., robots, are now affordable for mass markets. "Tasks" are a common notion in robotics. We would like to know how this task notion relates to the concepts of the Task-Oriented Programming paradigm and whether TOP would be a suitable paradigm for programming drones. You can pursue this problem theoretically by literature study and further formalization of the concepts in both fields, or take a more hands-on practical approach. In that case we have a very hackable consumer drone (http://ardrone2.parrot.com/) available for your experiments.

f) TOP in Haskell? (research a/b, master)

Currently, TOP is only available for Clean, although there is a limited implementation in Erlang. It is not easy to make an implementation of TOP. The Clean implementation makes use of very advanced techniques, such as dynamics, generic programming, and the like. While Haskell and Clean are very similar, there are differences in the advanced language features they support. We are looking for an MSc student who wants to research the differences and similarities between Clean and Haskell in the context of iTasks. We are looking to answer the following research questions:

  •  Which parts of iTasks cannot be implemented in GHC and why?
  •  Which parts of iTasks can be implemented in GHC with an alternative approach?

g) TOP IDE (research b, master)

New paradigms always lead to new insights of old problems. This also holds for TOP. In this project we investigate how TOP programming can improve on the structure of Integrated Development Environments (IDEs). IDEs are an interesting case study because they merge several heterogeneous technologies into one system: external applications need to be coordinated (compilers, type-checkers, parsers, file system managing functions), they are typically interactive (GUI), and there is the opportunity to distribute work (multi-user, version control).

h) TOP Graphical User Interfaces (research a/b, master)

Many scientific proposals have been developed to write Graphical User Interfaces in functional programming languages. The level of abstraction and composition of these approaches varies a lot. For Task Oriented Programming we are developing a new approach that is compositional and abstract. In a (bachelor or master thesis) project you can perform a literature study and compare and analyse these approaches, or design and implement a new approach that merges seamlessly with Task Oriented Programming.

i) TOP and Dependent Types (master)

Arbitrarily complex collaborations can be expressed in TOP. The disadvantage of such expressive power is that it is not easy to state properties of tasks that can be verified statically. On the other hand, type systems are becoming more powerful as well. Haskell has recently been extended with a restricted version of dependent types. Can such dependent types be of any help for stating static properties of TOP tasks?

j) TOP using New Devices (research b, new devices lab, master)

New devices like smart phones and tablets have gained large popularity within a few years and have an enormous impact on our life. However, this is only the start. There is a large number of gadgets under development: wearable devices (like smart glasses and smart watches), drones, 3D printers, and ordinary devices like light bulbs and thermostats connected via the internet to other devices. In TOP one can, in principle, specify the collaboration between devices without worrying about graphical user interfaces, communication, and collaborative behaviour. We want to study how TOP can be used for developing collaborative applications on new devices like Google Glass, smart watches, drones, and the like.

k) Testing and Education (research b, master)

The bachelor course Functional Programming consists of quite a large bundle of exercises. To aid the student assistants in marking student submissions, we have developed test suites for a part of these exercises. The test suites are executed by the Gast test tool. There are several interesting research questions in this project. Here are some of them. At what stage can the test tool be offered to students when working on their assignments? Does the availability and use of the test tool improve the quality of their submissions? How can the test tool best be integrated in the programming IDE that is used?

l) Medical innovation: the Anatomy Projector

In this project you will be responsible for reworking and assessing the software driving a medical innovation developed at the Radboudumc. With the Anatomy Projector (patent pending), patient-specific anatomical structures are displayed right onto the patient. Regardless of the movement of the projector, the projected image is always aligned to the patient, ready for surgical reference. A working prototype utilizing augmented reality techniques will be miniaturized using a 64-bit high-performance single-board computer. We need you to evaluate the safety and investigate the performance in terms of accuracy and speed at the software level after porting the existing code to this brand-new mini-computer. This assignment has a theoretical side, but mainly focuses on code implementation and safety assessments.

Are you up for a challenging assignment which will make a significant impact on healthcare?
Contact: Stefan Hummelink (Radboud UMC), stefan.hummelink@radboudumc.nl and Peter Lucas (ICIS)


Theme IV: Foundations

a) Extensions of the Turing machine or lambda calculus paradigm with "interaction", "advice" or "oracles" (BaSc, MaSc)

There is (a lot of) work on this by, for example, van Leeuwen and Wiedermann, Baeten, Luttik and van Tilburg, Wegner, Goldin, Smolka, and many others. Many questions on the universality of these models and their relative strength remain. A bachelor thesis on this topic has been written by Rick Erkens, but there is still a lot to be studied!

Contact: Geuvers; stream: General Theory

b) Operational Semantics and/or Axiomatic Semantics for programming languages or programming paradigms (BaSc)

Give a semantics to a programming language you like, or to a programming paradigm, as you've studied in "Semantiek en Correctheid" or "Berekeningsmodellen". An example is the bachelor thesis of Tom Nikken, who studied iTasks, supervised by Peter Achten and Herman Geuvers.

Contact: Geuvers; stream: General Theory

c) Grammars and Automata extended: XML standards, Parsing Expression Grammars, ... (BaSc, MaSc)

What you've seen in "Talen en Automaten" for string languages can be extended in various ways:

  1. To tree languages and hedge languages. These are ways to capture XML languages, which have context-free aspects like the requirement for matching brackets, but are otherwise pretty regular. An example question: how can we (automatically) learn an XML grammar from a collection of sample XML documents? A bachelor thesis on this topic has been written by Tom Evers, but there is still a lot to be looked into!
  2. Parsing Expression Grammars (PEGs) are a formalism that captures more than the regular languages, with a special focus on obtaining simple parsing algorithms. PEGs are used a lot.

Contact: Geuvers; stream: General Theory

d) Natural Deduction rules from truth tables (BaSc/MSc)

Together with Tonny Hurkens I have been working on finding a general procedure to derive deduction rules from truth tables, both for classical and for constructive logic. We have written a paper on this, but lots of questions remain and lots of generalizations are still possible, e.g. how to describe cut-elimination (proof reduction) in general, how to define a general proof term interpretation, how to extend this to predicate logic, ...

Contact: Geuvers; stream: Logic and Type Theory

e) Type systems for Classical Logic (BaSc/MSc)

The Curry-Howard formulas-as-types isomorphism is between proofs in constructive logic and typed lambda terms, which can be viewed as functional programs. Extending this isomorphism to classical logic, we can interpret classical proofs as "functional programs with jumps", which can be modeled as escape or call-cc constructs. A lot of work has been done on the extension of formulas-as-types to classical logic, giving rise to new presentations of the logical systems, new typed lambda calculi, new reduction rules and new reduction strategies. In all these areas (or in the combination of them) there are still many open questions left.
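
To make the "programs with jumps" reading concrete, here is a small Haskell illustration (this is just the continuation monad from the transformers library, not one of the calculi meant above): under Curry-Howard, callCC inhabits precisely the type shape of Peirce's law, the axiom that separates classical from constructive logic, and operationally it provides an escape construct.

    import Control.Monad.Trans.Cont (Cont, callCC, runCont)

    -- Peirce's law ((A -> B) -> A) -> A, read with Cont r as a "classical" modality.
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC

    -- A small "jump": escape early with 0 when the input is negative.
    safeDouble :: Int -> Cont r Int
    safeDouble n = callCC $ \exit -> do
      if n < 0 then exit 0 else return ()
      return (2 * n)

    main :: IO ()
    main = mapM_ (print . flip runCont id . safeDouble) [21, -5]  -- 42, then 0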

Contact: Geuvers; stream: Type Theory

f) Homotopy Type Theory and Higher Inductive Types (MSc)

The latest advancement in type theory is HoTT (Homotopy Type Theory), which aims at giving a "Univalent Foundations of Mathematics". It also aims at giving a better treatment of various fundamental concepts from computer science, like "data types with laws" and "equivalence of data". Important concepts for working with these are the so-called "higher inductive types". A preliminary treatment of what these are is given here, but there is still a lot to be studied!

Contact: Geuvers; stream: Type Theory

g) Fixed point combinator and Russell's paradox in inconsistent type theories (MSc)

In inconsistent type theories like lambdaU, lambdaU- or lambda*, we can find a closed term of type bot. We can also find a closed term of the type of the fixed point combinator: forall A:*, (A->A)->A. It is known that we can make a looping combinator of this type, but it is an open question whether there is a fixed point combinator of this type. (It is generally believed there is none.) Also, the proof of bot can be given in various ways, usually by interpreting some form of the Burali-Forti paradox, saying that we cannot well-order the collection of all well-orderings. But can we also interpret the much simpler Russell paradox? (Stating that the collection of all sets that are not members of themselves is not a set.)

Contact: Geuvers; stream: Type Theory

h) Partial recursive functions as functions over streams in Coq (MSc)

We propose the following method for defining the partial recursive functions (over natural numbers) as functions from streams to streams.

We consider streams over {0,S,c}, where c denotes a computation step (a tau step in process algebra terminology), so the number n is represented by c...c S c...c S ... S c...c 0 ..., i.e., n times an S followed by a 0, separated by c's and followed by anything. A function takes streams and produces a stream; given a (code for a) partial recursive function f of arity m, we define its interpretation as a function (in Coq) taking m streams and producing a stream. All of this should be defined in Coq.

Now we want to prove formally (in Coq) that the set of total functions is not enumerable: there is no function g that takes a (code of a) function f and returns true if f is total and false if f is not.
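
For illustration only, here is a rough Haskell transcription of the intended encoding (the project itself asks for a Coq development; the names below are made up):

    -- A number n is a stream emitting S exactly n times and then a 0, with
    -- computation steps C freely interleaved; a diverging computation emits C forever.
    data Sym = Z | S | C

    data Stream = Cons Sym Stream

    -- Encode a natural number without interleaved computation steps.
    encode :: Int -> Stream
    encode 0 = Cons Z bottom
    encode n = Cons S (encode (n - 1))

    -- After the terminating 0 the tail is irrelevant; here: C forever,
    -- which is also the encoding of a non-terminating computation.
    bottom :: Stream
    bottom = Cons C bottom

    -- The successor function as a stream transformer: emit one extra S, then copy.
    suc :: Stream -> Stream
    suc s = Cons S s

    -- Read back a result, skipping computation steps; this diverges on 'bottom',
    -- just like running a partial recursive function that does not halt.
    decode :: Stream -> Int
    decode (Cons C s) = decode s
    decode (Cons S s) = 1 + decode s
    decode (Cons Z _) = 0

    main :: IO ()
    main = print (decode (suc (Cons C (encode 4))))  -- 5

The Coq version would define such an interpretation for every partial recursive function scheme (composition, primitive recursion, minimisation), with the c steps making partiality explicit.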

Contact: Geuvers; stream: Formalization

i) 100 prisoners and one light bulb problem (MSc)

Formalize the 100 prisoners and one light bulb problem; look here for details. There are two goals that you want a solution to have:

  1. When it terminates you want to be certain all prisoners have been in the room.
  2. Have a low expected running time.

The algorithms can get pretty complex, so that computing the expected running time becomes extremely difficult. So people write simulations to estimate the expected running times, to compute (2). However, the simulations don't provide confidence in (1). Any broken algorithm that takes several (simulated) years to run will work in every tested case with near certainty, but the goal is to have absolute certainty. A proof is required for (1). This is a nice example of a problem where you want to write an algorithm and prove its correctness (1), then extract the algorithm and execute it to estimate (2), and compare the solutions to the puzzle.
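
As an illustration of the simulation side (goal 2), here is a hypothetical Haskell sketch of the classic single-counter strategy, using a hand-rolled pseudo-random generator to keep it dependency-free; the formalization and correctness proof for goal (1) is the actual challenge of the project.

    nPrisoners :: Int
    nPrisoners = 100

    -- A tiny linear congruential generator (Numerical Recipes constants),
    -- good enough for a rough estimate.
    nextRand :: Int -> Int
    nextRand seed = (1664525 * seed + 1013904223) `mod` 2 ^ (32 :: Int)

    -- One simulated run: prisoner 0 counts how often he finds the light on;
    -- every other prisoner switches it on at most once. Returns the number of
    -- days until the counter is certain that everyone has been in the room.
    run :: Int -> Int
    run = go 0 False 0 []
      where
        go days lightOn count flipped seed
          | count == nPrisoners - 1 = days
          | p == 0 && lightOn       = go days' False (count + 1) flipped seed'
          | p /= 0 && not lightOn && p `notElem` flipped
                                    = go days' True count (p : flipped) seed'
          | otherwise               = go days' lightOn count flipped seed'
          where
            seed' = nextRand seed
            p     = (seed' `div` 65536) `mod` nPrisoners  -- today's random prisoner
            days' = days + 1

    -- Averaging over many runs estimates the expected running time, which for
    -- this strategy is on the order of 10,000 days.
    main :: IO ()
    main = do
      let runs    = 100 :: Int
          results = map run [1 .. runs]
      print (fromIntegral (sum results) / fromIntegral runs :: Double)

Note that such a simulation says nothing about goal (1); a formal correctness proof plus program extraction is what closes that gap.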

Contact: Geuvers; stream: Formalization

j) Formalize a proof of correctness of an algorithm for finding zeroes of complex polynomials

We had the FTA project, in which we formalized the proof of the Fundamental Theorem of Algebra. However, we never got a reasonable algorithm out of that. It would be very nice to go the other way, start with an algorithm for finding zeroes of complex polynomials, and then formalize a proof of its correctness in the CoRN framework (possibly using that we already know that FTA holds, if necessary). The algorithm I would look at first for this would be the Jenkins–Traub algorithm. Wikipedia states that that is "practically a standard in black-box polynomial root-finders".

Contact: Wiedijk; stream: General Theory

k) Rewrite a core part of the Kenzo system in a typed language like OCaml or Haskell

The correctness of the Kenzo computer algebra software has been a hot topic for a while now. Generally people want either to (a) formalize the algebraic topology constructively, hoping that a reasonable executable program would flow from that, or to (b) formalize the correctness of fragments of the Lisp code of the Kenzo program. I think both approaches are misguided: you really want to prove an _explicit_ program correct, but if you do this in type theory that program really should be written in a typed functional programming language and not in Lisp. So, the project would be to take a core part of the Kenzo system and rewrite that in a typed language like OCaml or Haskell. And then the next stage (if there was time for it) would be to "port" that to Coq and prove properties of it.

Contact: Wiedijk; stream: Proof Assistant Technology

l) Formalization of parts of the C standard

In our project to write a formal version of the C standard there are parts that can be easily isolated. Examples are the C preprocessor, or the floating point arithmetic. It would be great if a student would write a formalization of such a part of the standard as a Bachelor's or Master's thesis project. Another possible project related to the Formalin project would be to define a framework for comparing different versions of C. Yet another possibility would be to do a "mini-Formalin", by writing a semantics for the very small specific subset of C called MISRA C (a restricted C standard for the automobile industry.)

Contact: Wiedijk; stream: Formalization

m) Compare/stress-test several Hoare-logic-based systems by formalizing the correctness of a small example in them

There are various systems for proving imperative programs correct, mostly based on Hoare logic (or nowadays on its extension, separation logic). A nice project might be either to "compare" those systems by formalizing the correctness of a small example in a couple of them, or to "stress test" one of these systems by formalizing the correctness of a not-so-small example in it.

Contact: Wiedijk; stream: Formalization

n) Properties of type theories

There still is a (to me) interesting piece of unfinished work by Herman Geuvers and me: we had a paper about "explicit conversions" in which we defined two type theories, which we called lambda-H and lambda-F. Proving the main theorem for lambda-H was much easier than for lambda-F, and therefore we (= Herman) only did that. However, I think that lambda-F is a nicer system, so the project would be to prove the analogous theorems for the lambda-F system. Alternatively, the project could be to look into other variations of these systems, and to prove properties of _those_ type theories.

Contact: Wiedijk; stream: Type Theory

o) de Bruijn's and de Groote's systems

Something that _I_ would like very much: de Bruijn, who was a genius, once defined a system called Lambda-Delta (or Delta-Lambda, depending on the paper that you look at :-)). However, this was before type theory got its present form, and therefore these systems do not resemble the current systems at all. Now Philippe de Groote once had a paper in which he gave a "more modern" version of these systems, but that was rather skimpy. It would be a nice project to (a) explicitly prove that the system by de Bruijn indeed corresponds to the system by de Groote and (b) implement this system efficiently. Also, it would be very interesting to relate this system to the modern type theories (it's quite close to LF aka lambda-P), or to create a variant that is closer to the more modern systems.

Contact: Wiedijk; stream: Type Theory

p) Deskolemization

Automated theorem provers like Otter, Prover9, Vampire and the E prover (random examples) can prove first-order formulas automatically. However, they use skolemization, and therefore they don't give you a proof of the _original_ formula. Of course the proof that they _do_ give can be transformed into a proof of the original formula, but I don't think that that was ever implemented in general. There's _some_ work about this by Hans de Nivelle, but that really deskolemizes the proofs that _his_ theorem prover Bliksem finds, and not arbitrary proofs of the skolemized formula. Also, I don't think there's a generic implementation of this that works on proofs in a common proof format (like for example Ivy or the TPTP proof format). This would be very nice to have implemented, even if it is only a proof of concept (the proofs probably would blow up exponentially, so I don't know whether it would be too useful in practice).

Contact: Wiedijk; stream: Automated Reasoning

q) Proof transformers for HOL Light

There's the declarative miz3 proof language for HOL Light. There are proof transformers for that framework that can be implemented, analogous to the "relprem" and "relinfer" tools of the Mizar system. Working on a tool like that would make a great master's thesis project.

Contact: Wiedijk; stream: Proof Assistant Technology

r) Add the "Poincaré principle" to HOL

It would be very nice to add a "Poincaré principle" to HOL, i.e., to add a way to HOL to "import" information from the execution of a program as a theorem of the system. This might be essential to bring the Flyspeck project to a close. There are two main things to work on: one is the specific "execution model" that you want this extension of HOL to be about, and the other (more important and more difficult) is to have this in a way that allows you to reasonably easily prove things about the programs. Because you do want to get meaningful information out of the fact that the execution of a program gave a certain result.

Contact: Wiedijk; stream: Proof Assistant Technology

s) Minimization of automata (MSc/BSc)

Minimization of deterministic automata is an important and classical problem. Many possible constructions and algorithms exist, and it is still not always clear which of them perform well in practice, and what the fundamental connections are. There are lots of exciting possible projects in this area: on understanding theoretical connections between various approaches, on experimental evaluation, or on the extension of existing approaches to other models of computation.
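
For concreteness, here is a compact illustrative sketch in Haskell (invented names, no claims about efficiency) of the classical partition-refinement approach in the style of Moore's algorithm: states are split as long as two states in the same block differ in acceptance or reach different blocks on some input symbol.

    import Data.List (nub, sort)

    type State = Int
    data DFA = DFA
      { states :: [State]
      , sigma  :: [Char]
      , delta  :: State -> Char -> State
      , final  :: State -> Bool
      }

    -- The signature of a state w.r.t. a partition: its acceptance flag plus
    -- the index of the block reached on every input symbol.
    signature :: DFA -> [[State]] -> State -> (Bool, [Int])
    signature m part q = (final m q, [ blockOf (delta m q a) | a <- sigma m ])
      where
        blockOf s = head [ i | (i, b) <- zip [0 ..] part, s `elem` b ]

    -- Split one block into groups of states with equal signatures.
    splitBy :: Ord b => (a -> b) -> [a] -> [[a]]
    splitBy f xs = [ [ x | x <- xs, f x == key ] | key <- nub (sort (map f xs)) ]

    -- Refine until stable; the resulting blocks are the states of the minimal DFA.
    minimise :: DFA -> [[State]]
    minimise m = go [filter (final m) (states m), filter (not . final m) (states m)]
      where
        go part
          | part' == part = part
          | otherwise     = go part'
          where
            part' = filter (not . null) (concatMap (splitBy (signature m part)) part)

    -- Example: a 4-state cycle over {a} accepting at even states; it collapses
    -- to the minimal 2-state "even number of a's" automaton.
    example :: DFA
    example = DFA [0, 1, 2, 3] "a" (\q _ -> (q + 1) `mod` 4) even

    main :: IO ()
    main = print (minimise example)  -- [[0,2],[1,3]]

Hopcroft's algorithm refines the same idea with a smarter choice of splitters, and the coalgebraic view relates this fixpoint computation to bisimulation, which is where the theoretical questions mentioned above start.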

Contact: Jurriaan Rot

t) Weak equivalence of systems (MSc)

Deciding equivalence of models of computation is a fundamental problem, underlying many problems and techniques in the analysis and verification of programs. Among the various notions of equivalence, weak bisimilarity (and its variants) is practically very relevant and is in use in various verification tools. Weak bisimilarity abstracts away from internal computation steps, so that two programs are equivalent if they have the same observable behaviour. The aim of this project is to develop proof techniques for weak bisimilarity of potentially infinite-state programs.

Contact: Jurriaan Rot

u) Regular expressions and pointer programs (MSc)

Pointer programs are notoriously difficult to analyze. One of the basic questions is reachability: is a variable x reachable from some pointer y (potentially via various steps) at some point in the program? The idea of this project is to approach the problem of reachability using regular expressions with tests.

Contact: Jurriaan Rot

v) Deciding properties of stream programs (BSc/MSc)

In this project, we will look at programs that operate on streams (infinite sequences). Depending on the available program constructs, is it possible to prove automatically that two programs behave the same? Or to prove, for instance, that a given program always returns an increasing stream?
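
A tiny Haskell illustration of the kind of programs meant here (invented definitions): two different recursive definitions of the stream of natural numbers. A bounded test can compare finite prefixes, but only a genuine proof technique (e.g. coinduction, or a decision procedure for a restricted class of stream programs) can establish that the streams are equal everywhere, and that is what this project is about.

    data Stream = Cons Int Stream

    -- The natural numbers, defined by counting up.
    nats :: Stream
    nats = from 0 where from n = Cons n (from (n + 1))

    -- The same stream, defined by pointwise addition with the stream of ones.
    nats' :: Stream
    nats' = Cons 0 (splus ones nats')
      where
        ones = Cons 1 ones
        splus (Cons x xs) (Cons y ys) = Cons (x + y) (splus xs ys)

    takeS :: Int -> Stream -> [Int]
    takeS 0 _           = []
    takeS n (Cons x xs) = x : takeS (n - 1) xs

    -- A bounded check can only refute equivalence, never prove it.
    main :: IO ()
    main = print (takeS 10 nats == takeS 10 nats')  -- True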

Contact: Jurriaan Rot


Theme V: Computing Education

In this theme we investigate teaching and learning aspects of computing science (in Dutch: ‘vakdidactiek informatica’). The topic is regarded in a broad sense, including digital literacy, for example. Our projects are carried out in various contexts, ranging from secondary education (schools) to higher education (universities). We collaborate with researchers in other Dutch universities (including OU, RUG, TU/e, TUD) and with groups abroad. We list some example projects below. (Contact Erik Barendsen or Sjaak Smetsers)

a) Programming education

Programming is considered a difficult skill, and many researchers have studied learning difficulties and possible instruction strategies. Current research activities focus on learning object-oriented programming, tools for visualization, and feedback on code quality.

b) Digital literacy and computational thinking

Digital literacy involves knowledge, attitude, skills and behavior; see for example a recent advisory report by the Royal Netherlands Academy of Arts and Sciences (KNAW). There is consensus about the importance of digital literacy in present-day society, but too little is known about adequate ways to develop it. An important aspect of digital literacy is computational thinking, a set of analytical skills necessary to recognize computing aspects in real-life problems, reformulate those problems in terms of information processing, and solve them with the use of computers.

c) Development and testing of teaching materials

There is a great need for development and improvement of teaching modules, especially for secondary education. The department has already contributed to modules about cyber security, robotics, object oriented programming, and model checking.
