As you may have noticed from my writings, the main topic of this blog is digital infrastructures for (networked) computer applications. These infrastructures are the platforms on which computer applications are delivered to users, so that users can interact with those applications and with each other.
What are the major research questions around these digital infrastructures? What will the future look like? Where are we going to hit the wall when scaling out these infrastructures?
In the future we will see a growing number of terminals, a growing number of applications, a growing amount of data, and rapidly changing relative economics of the components that make up digital infrastructures. Earlier I described some of the technical progression in the period 2001-2005. The speed of this progress may change a little, but there is no sign of it coming to a full stop.
The ‘terminal devices’ are now predominantly PCs, but this will expand into entertainment devices such as TVs and game boxes, and into more appliance-style devices. Terminals for humans may become a minority, as sensors ranging from surveillance cameras and environmental sensors to RFID tags become the dominant category of terminals. It is realistic to assume that surveillance cameras will replace the web as the dominant source of bandwidth consumption.
There will be a growing number of applications accessible through these terminal devices. An application is roughly a data model plus a set of functions operating on it. Networked applications typically live on servers, but usually have a client component. A word processor is an application: its data model is formatted text, and its functionality allows you to manipulate that text. Another example would be Gmail, whose data model is a set of messages, and whose functionality allows you to manipulate the messages and their flow.
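The "data model plus functions" view can be made concrete in a few lines. The sketch below is a toy illustration using a Gmail-like mailbox; all names (`Message`, `Mailbox`, `deliver`, and so on) are my own invented examples, not any real system's API.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    # The data model: what a message *is*.
    sender: str
    subject: str
    read: bool = False

@dataclass
class Mailbox:
    messages: list = field(default_factory=list)

    # The functions: what you can *do* with the data model.
    def deliver(self, msg: Message) -> None:
        self.messages.append(msg)

    def unread(self) -> list:
        return [m for m in self.messages if not m.read]

    def mark_read(self, msg: Message) -> None:
        msg.read = True

box = Mailbox()
box.deliver(Message("alice@example.org", "hello"))
print(len(box.unread()))  # 1
```

In a networked application the `Mailbox` state would live on a server, while the client component renders it and invokes the functions remotely.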
We can expect a growing amount of data in the network. The reasons for this growth are manifold. Growing numbers of users devote ever larger fractions of their time to contributing their personal history. Images eat up storage space much more quickly than text, and this is even more true of video.
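A back-of-envelope calculation shows how steep the text-to-image-to-video progression is. The sizes below are rough illustrative assumptions, not measurements:

```python
# Rough, assumed sizes (order of magnitude only):
text_page = 4 * 1024        # ~4 KB for a page of plain text
photo     = 2 * 1024**2     # ~2 MB for a compressed photo
video_hr  = 1 * 1024**3     # ~1 GB for an hour of compressed video

print(photo // text_page)   # one photo ~ 512 pages of text
print(video_hr // photo)    # one hour of video ~ 512 photos
```

So each step up the media ladder multiplies storage demand by a few hundred; users switching from writing to photos to video is itself a major growth driver.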
Digital infrastructures are made up of processors, storage, and networking. The costs of these all change rapidly, but at different rates: memory is getting cheaper much faster than processing, and network bandwidth prices are dropping more slowly than that. The cost of the humans who manage this technology is dropping slowest of all, if at all. This means there will be continuing shifts in what the optimal design for networked computing is.
Now for the research questions. How fast is it actually changing? How do we organize this, both technically and people-wise, in order to control cost, improve quality, and speed up the process of change and development?
For example, technology development in this area is very much structured by standards. These allow people to collaborate on the design of system components so that those components will later work together. Standards can be set by a single vendor, or agreed in a standardization committee. What are the dynamics of that process?
How can we best organize the development, and in particular the management, of large digital infrastructures?
When I reviewed the predictions, one conclusion was that the uptake of technology is not as rapid as one might expect. This is in line with experiences in my own neighborhood.
One example that I am currently involved in is IPv6. What are its advantages, and how will the transition to IPv6 occur, if at all? An extensive discussion on that can be found here.
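The headline advantage of IPv6 is its vastly larger address space: 128-bit addresses instead of IPv4's 32 bits. A minimal sketch using Python's standard `ipaddress` module makes the difference tangible:

```python
import ipaddress

# The entire IPv4 and IPv6 address spaces, expressed as networks.
ipv4 = ipaddress.ip_network("0.0.0.0/0")   # all of IPv4
ipv6 = ipaddress.ip_network("::/0")        # all of IPv6

print(ipv4.num_addresses)   # 2**32  = 4294967296
print(ipv6.num_addresses)   # 2**128, about 3.4e38

# IPv6 addresses parse and compare like any other address:
addr = ipaddress.ip_address("2001:db8::1")
print(addr.version)         # 6
```

Of course, the abundance of addresses is only the technical side; the harder research question is the transition dynamics, since IPv6 delivers little value to an early adopter until the rest of the network follows.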