The Future Web: A Systems Design Perspective

Athens.
The second keynote this evening is by Joseph Sifakis, who begins by taking us on a quick tour of the history of informatics - starting with Gödel and Turing in the 1930s and moving through information systems, graphic interfaces, and the emergence of the Web and the information society, leading today to the increasing embedding of computing systems into all kinds of technologies. In the future, further growth in computing power and storage capacity means that computing systems will be literally everywhere. In developed countries, in fact, a person already uses some 250 different processors per day - processors in cars, computing equipment, home entertainment, telecommunication systems, and so on.

This poses a significant challenge for systems design. Embedded systems interact continually with their environment: they need to be reactive within set parameters, autonomous without requiring human intervention, robust, and scalable; and economic constraints also need to be taken into account in systems design, of course. This, then, is a technological challenge for systems designers. In descending order of difficulty, such systems may need to be safety-critical (they must not fail), security-critical (they must not be able to be hacked), or mission-critical (they must be continuously available), or they may simply need to provide a best-effort solution; further, some systems are also business-critical - their failure to operate as advertised may severely affect the sustainability of the manufacturer.

All Internet infrastructure is mission-critical, for example; at high cost, we also manage to design safety- or security-critical systems; and we have complex best-effort systems in telecommunication and Web-based applications. What we need are affordable safety- or security-critical systems in transport, health, and energy management, as well as a successful integration of heterogeneous systems of systems - for example to support the convergence of Internet and embedded systems in the Internet of Things, in automated highway systems, or in the next generation of traffic control systems.

Physics-based disciplines build on testable theories as they design their artefacts; the same is not true for informatics: the design of IT systems is a risky undertaking that requires large teams of designers working over several years. The task is complex, requirements are often incomplete and ambiguously formulated, and design approaches are empirical and reliant on the expertise and experience of teams. As a result, large IT projects often come in over budget, and some 40% of all IT projects fail.

Such projects have both functional requirements (the services that need to be delivered) and extra-functional requirements (the quality with which those services are delivered), such as performance, security, and safety. An IT system is 'correct' when it meets its requirements, and this correctness can only be checked by testing - of physical prototypes or of virtual models. Virtual models, in turn, can be tested using ad hoc models or formal models of the system, and only formal model-based testing achieves what Joseph calls exhaustivity. Formal models need to be faithful to the real system, and should be generated automatically from the system descriptions. This is relatively easily achievable for hardware, but much more difficult for software or for mixed software/hardware systems - these run into what is called state explosion, where the number of possible states the model may be in grows exponentially with the number of interacting components, making exhaustive testing practically impossible.
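To get a feel for why this happens, here's a minimal sketch in Python (my illustration, not from the keynote) of how the state space of a composed system multiplies: each component contributes only a handful of local states, but the composite must account for every combination.

from itertools import product

def joint_states(components):
    # The composite state picks one local state from each component,
    # so the total count is the product of the component sizes.
    return list(product(*components))

# Ten components with just four local states each...
components = [("idle", "busy", "waiting", "failed")] * 10

print(len(joint_states(components)))  # 4 ** 10 = 1,048,576 composite states

Add a few more components and even enumerating the states, let alone checking each one, becomes infeasible.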

Embedded systems, in particular, marry physicality and computation, and therefore require knowledge of both hardware design and control design; this problem becomes even more pronounced when we move into the area of interoperability between multiple embedded systems. What needs to happen here is a move from a posteriori verification of correct functioning to assembling the overall system from a number of verified individual components, so that the composite system can be relied upon to function as required.
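A toy sketch of that compositional idea (the assume/guarantee contract structure below is my own simplification, not Sifakis's method): each component is verified once against an interface contract, and composition then only has to check that the contracts fit together.

from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    provides: frozenset  # properties this component is verified to guarantee
    requires: frozenset  # properties it assumes of its environment

def compose(contracts):
    # Check every component's assumptions against the guarantees of the
    # others; if all are met, the composite inherits the combined
    # guarantees without re-verifying the whole system from scratch.
    provided = frozenset().union(*(c.provides for c in contracts))
    for c in contracts:
        unmet = c.requires - provided
        if unmet:
            raise ValueError(f"unmet assumptions: {sorted(unmet)}")
    return Contract(provides=provided, requires=frozenset())

sensor = Contract(provides=frozenset({"readings"}), requires=frozenset())
controller = Contract(provides=frozenset({"commands"}),
                      requires=frozenset({"readings"}))
print(compose([sensor, controller]).provides)  # both guarantees, no re-verification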

Security in Internet-based systems, as another example, requires confidentiality, integrity, availability, and authenticity; to achieve these, more R&D work is needed to move towards security by design rather than security by obscurity. There is a need to enhance transparency in systems, both in the infrastructure and in software applications, and to enforce compliance with standards.
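To make 'security by design' slightly more concrete, here's a small sketch (again my illustration; the key handling is simplified) in which message authenticity and integrity are checked by construction, using only Python's standard library:

import hmac, hashlib

SECRET_KEY = b"illustrative-placeholder-key"  # in practice, provisioned securely

def sign(message: bytes) -> bytes:
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"open valve 3")
print(verify(b"open valve 3", tag))   # True
print(verify(b"open valve 30", tag))  # False: tampering is detected

The point is that rejecting tampered messages is a property of the design itself, not of keeping the mechanism secret.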

Finally, the Internet of Things, as a convergence between embedded systems and the Internet, requires major breakthroughs in wireless sensor networks, advances in miniaturisation, IP protocol usage by small devices, standards for naming objects, tools, and platforms, and improved security and reliability of the Internet. This may lead to an evolution towards a number of specific critical Internets, such as smart grids or intelligent transport systems.

What remains important in all of this is the need to guarantee the freedom of the Internet. Free services on the Net at present depend on specific business models, and such models need to be supported and protected, for example by government regulation or subsidy. Similarly, it remains important to defend civil liberties and privacy rights. The Internet of Things may mark a turning point here: the opacity of the existing infrastructure and its unmanageable complexity may make it necessary to develop a variety of domain-specific Internets.
