Table of Contents
|SATURDAY 11 SEPTEMBER|
|11:30||Opening of VODCA|
|Session: SECURE ARCHITECTURES|
|11:40 - 12:25||Invited Talk by Jan
Jürjens; "Foundations for Designing Secure Architectures"
Developing security-critical systems is difficult, and there are many well-known examples of security weaknesses exploited in practice. In particular, little research has so far been performed on the soundly based design of secure architectures, which is urgently needed to develop secure systems reliably and efficiently. In this abstract, we sketch some research on a sound methodology supporting secure architecture design. We give an overview of an extension of UML, called UMLsec, that allows expressing security-relevant information within the diagrams of an architectural design specification. We define foundations for secure architectural design patterns. We present tool support which has been developed for the UMLsec secure architecture approach.
|12:25 - 12:50||"A Web Service
Architecture for Enforcing Access Control Policies". Claudio Agostino
Ardagna, Ernesto Damiani, Sabrina De Capitani di Vimercati, and Pierangela Samarati
Web services represent a challenge and an opportunity for organizations wishing to expose product and service offerings through the Internet. The Web service technology provides an environment in which service providers and consumers can discover each other and conduct business transactions through the exchange of XML-based documents. However, any organization using XML and Web Services must ensure that only the right users, sending the appropriate XML content, can access their Web Services. Access control policy specification for controlling access to Web services is thus becoming an emerging research area, due to the rapid development of Web services in the modern economy. This paper is an effort to understand the basic concepts for securing Web services and the requirements for implementing secure Web services. We describe the design and implementation of a Web service architecture for enforcing access control policies; the overall rationale and some specific choices of our design are discussed.
|12:50 - 13:15||"An MDA Approach
to Access Control Specifications Using MOF and UML Profiles". Torsten
Fink, Manuel Koch, and Karl Pauls
We present a Model Driven Development (MDD) approach to the development of access control policies for distributed systems. The models are expressed as Meta-Object Facility (MOF) models enriched by Unified Modeling Language (UML) profiles. The view-based access control model is used as an example, for which we present a platform-independent meta-model and platform-specific meta-models for the Java 2 Platform, Enterprise Edition (J2EE). A management application is used to build instance models for the platform-independent and platform-specific meta-models, respectively. We present in this paper how the platform-independent models can be used to generate the platform-specific models, and how the meta-models can be used to generate the models for a specific application. Finally, the platform-specific models are used to generate the security policy to be deployed in the security infrastructure. We show how consistency requirements can be formally verified using category-based graph transformations.
|Session: SECURITY ANALYSIS|
|14:45 - 15:30||Invited Talk by Jan
Münther; "On the Security of Security Software".
Currently, security appears to be one of the strongest sales arguments for software vendors all over the world. No other sector of the software industry has recently undergone a wave of mergers and acquisitions comparable to that among the producers of security software. Market analyses from all leading business consultants predict heavy growth in the field, and the annual figures of major players such as Checkpoint or Symantec back up these statements. However, the basic mechanisms of the industry still apply: innovations have to be created and presented where demand is predicted, and the pressure to come up with new solutions is at an all-time high. Consequently, products are pushed out the door as quickly as possible, all too often before quality control has been given the time and effort that due diligence would imply. While any customer of security software expects the product to enhance the security of their environment, it might actually pose new risks. Blind faith in these solutions can further contribute to compromising the overall security of a network where a substantial advantage had been expected. The purpose of this article is to shed a little light on the situation as it really is.
|15:30 - 15:50||"Compositionality
of Security Protocols: A Research Agenda". Cas Cremers
The application of formal methods to security protocol analysis has been extensively researched during the last 25 years. Several formalisms and (semi-)automatic tools for the verification of security protocols have been developed. However, their applicability is limited to relatively small protocols that run in isolation. Many of the protocols in use today cannot be verified using these methods. One of the main reasons for this is that these protocols are composed of several sub-protocols. Such composition of protocols is not addressed in the majority of formalisms. In this paper we identify a number of issues that are relevant to applying formal methods to the problem of security protocol composition. Additionally, we describe what research needs to be done to meet this challenge.
|15:50 - 16:15||"Modelling a
Secure Agent with Team Automata". Lavinia Egidi and Marinella Petrocchi
We use Team Automata in order to model a protocol by Cachin et al. for securing agents in a hostile environment. Our study focuses on privacy properties of the agents. We use the framework to prove a result from the work by Cachin et al. As a by-product, our analysis gives some initial insight on the limits of the protocol. From a different perspective, this study continues a line of research on the expressive power and modelling capabilities of Team Automata. To the best of our knowledge, this is the first attempt to use Team Automata for the analysis of privacy properties.
|Session: INFORMATION MANAGEMENT - FORMAL METHODS|
|chair:||Rebeca Diaz Redondo|
|16:45 - 17:10||"An Observational
Model for Spatial Logics". Emilio Tuosto and Hugo Torres Vieira
Spatiality is an important aspect of distributed systems, because their computations depend both on the dynamic behaviour and on the structure of their components. Spatial logics have been proposed as the formal device for expressing spatial properties of systems. We define CCS||, a CCS-like calculus whose semantics allows one to observe spatial aspects of systems, on top of which we define models of the spatial logic. Our alternative definition of models is proved equivalent to the standard one. Furthermore, logical equivalence is characterized in terms of the bisimilarity of CCS||.
|17:10 - 17:35||"A Federated Layer
to Integrate Heterogeneous Knowledge". Agustina Buccella, Alejandra
Cechich, and Nieves R. Brisaboa
The way developers define architecture, execute architectural strategy, and record the results makes a critical difference in the ability to deal with information and knowledge. In this context, integrating databases is indeed very important, but the different semantics they may have usually complicates administration. Therefore, recovering information through a common semantics becomes crucial in order to realise the full knowledge contained in the databases. In this paper, we describe and illustrate a proposal on the use of layered architectures to integrate knowledge from heterogeneous sources. We illustrate how the process might be facilitated by applying ontology-based comparisons as part of the components' behaviour.
|17:35 - 18:00||"A Logic for
Graphs with QoS". Gianluigi Ferrari and Alberto Lluch-Lafuente
We introduce a simple graph logic that supports specification of Quality of Service (QoS) properties of applications. The idea is that we are not only interested in representing whether two sites are connected, but we want to express the QoS level of the connection. The evaluation of a formula in the graph logic is a value of a suitable algebraic structure, a c-semiring, representing the QoS level of the formula and not just a boolean value expressing whether or not the formula holds. We present some examples and briefly discuss the expressiveness and complexity of our logic.
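As a toy illustration of the central idea (a formula evaluates to a c-semiring value rather than a boolean), the sketch below interprets "site a is connected to site b" in the tropical semiring with min as "plus" and arithmetic + as "times", so the answer is a QoS cost such as latency. The graph, node names, and edge weights here are invented for illustration; this is not the paper's actual logic.

```python
# Evaluating connectivity in the tropical c-semiring <R+ U {inf}, min, +, inf, 0>.
import itertools
import math


def qos_connected(edges, nodes, src, dst):
    """Floyd-Warshall over the semiring: choice = min, composition = +."""
    d = {(u, v): math.inf for u in nodes for v in nodes}  # inf = "not connected"
    for u in nodes:
        d[u, u] = 0.0                       # semiring unit: zero cost to stay put
    for u, v, w in edges:
        d[u, v] = min(d[u, v], w)
    # k varies slowest in product(), as Floyd-Warshall requires.
    for k, i, j in itertools.product(nodes, repeat=3):
        d[i, j] = min(d[i, j], d[i, k] + d[k, j])
    return d[src, dst]


# Invented example graph: directed links with latency weights.
edges = [("a", "b", 3.0), ("b", "c", 2.0), ("a", "c", 7.0)]
```

Here the "truth value" of connectivity from a to c is 5.0 (via b), while from c to a it is infinity, the semiring's zero, recovering the boolean reading as a special case.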
|SUNDAY 12 SEPTEMBER|
|Session: INFORMATION MANAGEMENT - INNOVATIVE TECHNIQUES|
|chair:||Maurice ter Beek|
|9:00 - 10:00||Invited Talk by Raymond McGowan and Catharina Candolin; "Complexity in Real World Architectures"|
|10:00 - 10:25||"Distributed
Backup through Information Dispersal". Giampaolo Bella, Costantino Pistagna, and Salvatore Riccobene
The formal aspects underlying a novel distributed backup service are discussed. Strength and originality of the service lie in the combined adoption of an established information dispersal algorithm with a simplified version of an existing location service. Information dispersal makes our service threshold-secure in that the backup owner only needs participation of a pre-established threshold number of nodes to recompose a distributed backup. This means that the service is highly available as it tolerates a number of node breakdowns. Even the right threshold number of nodes cannot retrieve the backup on their own initiative. The location service adopted allows our service to work over non-organized, flat networks. Indirect advantages are the optimization of the total redundancy of data and the efficient management of resources. Our service has reached the stage of a proof-of-concept implementation.
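The threshold property described above can be illustrated with a minimal Shamir-style polynomial sharing over a prime field: any t of the n fragments recover the data, while fewer reveal nothing. This is only a sketch of the threshold idea, not the information dispersal algorithm the paper adopts (which also optimizes the total redundancy of the stored data); all parameters below are invented.

```python
# Minimal (t, n)-threshold sharing sketch over a prime field.
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is mod P


def split(secret, n, t):
    """Produce n shares such that any t of them recover the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]

    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the random polynomial
            acc = (acc * x + c) % P
        return acc

    return [(x, poly(x)) for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

With n = 5 and t = 3, any three nodes can recompose the backup, so the service tolerates two node breakdowns, yet no two nodes can retrieve it on their own.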
|10:25 - 10:50||"Fast and Flexible
Compression for Web Search Engines". Antonio Farina, Nieves R.
Brisaboa, Cristina Paris, and Jose R. Parama Gabia
In this paper we present the adaptation of a compression technique, specially designed to compress large textual databases, to the peculiarities of web search engines. The (s,c)-Dense Code belongs to a new category of compression techniques that allow fast and flexible search directly on compressed files. However, these methods are only suitable for large natural-language texts of at least 1 megabyte; otherwise they do not achieve an attractive amount of compression. In order to take advantage of the search capabilities of these techniques (they allow searches on compressed files up to eight times faster than searches on the plain versions), we present a modification of the basic (s,c)-Dense Code compression technique to achieve reasonable compression ratios with small files, a requirement when working with search engines.
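The codeword assignment behind the (s,c)-Dense Code can be sketched as follows: byte values 0..s-1 act as "stoppers" that end a codeword, values s..255 as "continuers", with s + c = 256, and shorter codewords go to more frequent words. The sketch below shows only this rank-to-codeword mapping (the particular choice s = 190 is arbitrary; real compressors tune s and c per text, and add vocabulary handling and searching on top).

```python
# Rank-to-codeword mapping of the (s,c)-Dense Code, s + c = 256.
def sc_encode(rank, s=190, c=66):
    """Map a word's frequency rank (0 = most frequent) to its codeword."""
    assert s + c == 256, "stoppers and continuers must cover all byte values"
    code = [rank % s]              # final byte: a stopper (value < s)
    x = rank // s
    while x > 0:
        x -= 1
        code.insert(0, s + x % c)  # leading bytes: continuers (value >= s)
        x //= c
    return bytes(code)


def sc_decode(code, s=190, c=66):
    """Inverse of sc_encode: recover the frequency rank from a codeword."""
    x = 0
    for b in code[:-1]:            # continuer bytes
        x = x * c + (b - s) + 1
    return x * s + code[-1]        # stopper byte
```

Because a stopper byte unambiguously ends every codeword, the code is self-synchronizing, which is what makes direct search on the compressed file possible.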
|Session: SECURE NETWORKING|
|11:10 - 11:55||Invited Talk by
Stefano Bistarelli; "Soft Constraints for Security"
Integrity policies and security protocols are usually defined by sets of rules describing which actions must or must not be performed. We describe a uniform framework to represent the security properties of integrity and confidentiality/authentication by using the constraint satisfaction framework. We show how the crucial goals of integrity and confidentiality/authentication may be achieved in various forms, each of different strength. Using soft (rather than crisp) constraints, we develop a uniform formal notion for the two goals. They are no longer formalised as mere yes/no properties as in the existing literature, but gain an extra parameter, the security level. For example, different messages can enjoy different levels of confidentiality, a principal can achieve different levels of authentication with different principals, or a system can ensure a different integrity level.
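The shift from yes/no properties to security levels can be illustrated with the fuzzy c-semiring, whose carrier is [0, 1] with max as "plus" and min as "times": min combines the constraints of one scenario (the weakest step bounds the whole), max compares alternatives. This toy is our own illustration of the semiring operations, not the paper's formalism, and the level values are invented.

```python
# Security goals evaluated in the fuzzy c-semiring <[0,1], max, min, 0, 1>.
def times(*levels):
    """Combine constraints along one scenario: the weakest link wins."""
    return min(levels)


def plus(*levels):
    """Choose among alternative scenarios: the best option wins."""
    return max(levels)


# Invented confidentiality levels of the steps of two protocol runs:
scenario_a = times(0.9, 0.8, 1.0)    # run A is only as confidential as 0.8
scenario_b = times(0.7, 1.0, 1.0)    # run B is capped at 0.7
best = plus(scenario_a, scenario_b)  # the protocol achieves level 0.8
```

Setting the carrier to {0, 1} recovers the classical crisp reading, which is why the soft formulation strictly generalises the yes/no one.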
|11:55 - 12:15||"On Mobile Agents
Resistance to Traffic Analysis". Kamil Kulesza, Zbigniew Kotulski, and
This paper concerns itself with a formulation of the traffic analysis problem for mobile agents. It is an interesting theoretical problem as well as a critical feature when using agents on a massive scale in decision-making systems. Decision-making systems are applied to demanding and complex environments such as stock markets. The mobile agents used are natural targets for attacks, because they provide information for decision making. The resulting information can have a value measured in millions of dollars, and information of such high value attracts potential attacks. An efficient way to attack the user of a decision-making system is to learn her strategy and respond in kind. In this respect even passive observation of agents can provide useful data, namely what information they are gathering. A common first defense is to provide anonymity for mobile agents. However, what happens when anonymity is gone? What information then becomes available, and what steps will the user take? The problem has not previously been formulated in such a framework. We formulate it in terms of the various factors used for traffic analysis. These factors originate from different side channels that provide information on the operating agents. Finally, we state a paradox which links excessive use of countermeasures against traffic analysis with weakened system security.
|12:15 - 12:35||"On the Use of
Singular Value Decomposition for a Fast Intrusion Detection System".
Sanjay Rawat, Arun K. Pujari, and V. P. Gulati
Traditionally, the application of data mining in intrusion detection systems (IDS) concentrates on the construction of operational IDSs. The main emphasis is on the data mining steps, while the other steps of the KDD (Knowledge Discovery in Databases) process are largely ignored. The present study investigates the applicability of a spectral analysis technique, singular value decomposition (SVD), as a preprocessing step to reduce the dimensionality of the data. This reduction highlights the most prominent features in the data by removing the noise. The preprocessing step not only makes the data noise-free, but also reduces its dimensionality, thereby minimizing computational time. We perform experiments on various data sets, such as the DARPA'98, UNM sendmail, inetd, and login-ps data sets, to show that reducing the dimension of the data does not degrade the performance of the IDS. In fact, in the case of monitoring a single application such as sendmail, applying the reduction technique gives very encouraging results.
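The preprocessing step described above amounts to projecting each feature vector onto the top-k singular directions of the data matrix before it reaches the detector. The sketch below shows just that projection; the random matrix stands in for audit records, since the cited DARPA/UNM data sets are not reproduced here, and the dimensions are invented.

```python
# SVD as a dimensionality-reduction preprocessing step.
import numpy as np


def svd_reduce(X, k):
    """Project rows of X onto the k most prominent singular directions."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T            # (n_samples, k) reduced representation


rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))     # stand-in: 100 audit records, 20 raw features
Z = svd_reduce(X, 5)               # each record is now a 5-dimensional vector
```

Because the rows of Vt are orthonormal, keeping all 20 directions is a pure rotation; dropping the trailing ones discards only the directions with the smallest singular values, i.e. the noise.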
|12:35 - 13:00||"Formal Security
Analysis for Ad-Hoc Networks". Sebastian Nanz and Chris Hankin
In ad-hoc networks, autonomous wireless nodes can communicate by forwarding messages for each other. For routing protocols in this setting, it is known that a malicious node can perform a variety of attacks just by not behaving according to the specification. Whilst secure versions of routing protocols are under development, little effort has been made to formalise the scenario similarly to developments in the realm of traditional security protocols for secrecy and authentication. We present a broadcast process calculus suitable to describe the behaviour of protocols which require a local memory component for every node. By adding annotations for the origin of messages, we are able to formalise a vital security property in this context, called store authorisation. Furthermore, we describe a static analysis for the detection of violations of this property. For a model of the AODV protocol in our calculus, we are then able to deduce that an attacker may introduce a routing loop in certain networks.
|13:00||Closing of VODCA|