4.6
Open Design

Thoughts on software and the representation of movement

Mark Meagher

As any student of architecture soon discovers, discussing an architectural project requires some means of representation, all the more so when movement through it is being considered. As will already be evident from the varied contributions in this book, the experience of moving through an existing work of architecture or landscape is influenced by a great diversity of factors that include, but are not limited to, cues found in the physical environment. All the senses are involved in this experience, along with a range of tacit and explicit influences, such as the time of year, the weather and the company of those who share it. Taken as a whole, the experience remains beyond description, for all attempts to communicate it are selective, each medium capturing only specific aspects. Film merges the acoustic element with the visual; measured drawings such as sections capture multiple aspects of spatial quality; and physical models offer the spatial distinctions of binocular vision and the possibility of showing material and tectonic considerations.

The desire to anticipate the experiential qualities of an imagined building or landscape is certainly not limited to computational media, but this chapter will investigate how software lets us anticipate aspects of architectural experience, and how it shapes the kinds of choices we can make. I will consider several techniques for working with and understanding the movement of people using the computer, asking in each case what the underlying relationship is between the representation and the reality it aims to communicate. Software is primarily about enabling communication – taking something inchoate and making it clear – and I will consider various attempts at designing architectural software intended to fulfil this desire for clarity.

The representation of movement provides a demanding test for designers of software, for it comes with in-built presuppositions about the world, about what is important and what can safely be ignored. When choosing a type of software to understand movement, one is obliged to make choices about how the complex reality of experience can be reduced to something quantitatively described and modelled. Most users are unaware of the epistemological predispositions of their digital tools, which are not often exposed to view, for software usually works as a ‘black box’, that is, a closed system that can be described without recourse to an understanding of its inner workings. So long as one understands what is acceptable as input, the software will produce an acceptable output. The black-box approach has benefits in terms of usability, and, as long as the output conforms to one’s expectations, there is no need to tinker with settings or customize the behaviour of the model. Only when the model fails to produce meaningful results or to meet necessary expectations must one open up its inner workings, exposing both the presuppositions of the software and the understanding of reality on which it depends.

Marshall McLuhan observed that new media tend to adopt the content of those they have replaced, and much design software adopts the metaphors and content of pre-digital forms of representation.1 It is possible, for example, for a designer to use Photoshop without understanding image-compression algorithms, and many of the tools provided, such as dodging and burning, employ metaphors taken from the photographic darkroom. Similarly, early word-processing software mimicked closely the experience of writing on a typewriter, including, of course, the qwerty keyboard designed to avoid mechanical clashes, and computer-aided design (CAD) software initially imitated the process of 2D drafting with a mechanical drawing instrument, such as the drafting arm, mayline or T-square. At first glance, such software appears to serve the simple purpose of translating content from one medium to another, providing a more efficient means of representing and storing the same information. Early adopters of CAD software could hardly have imagined the profound impact their decision would have on every aspect of design, but the limitations of early CAD software have been summarized by John Frazer, pioneer in design computing:

Probably with all the best intentions the computer graphics companies foisted computer-aided design on to a gullible architectural profession . . . The drawings produced were slower and uglier and derived by a most unpleasant and seemingly unnatural process in front of a frustrating screen. The therapeutic pleasure of the manual drawing board process with time to reflect upon what was being produced was lost. The CAD industry had ‘solved’ the wrong problem or at best had bodged a response to a badly stated problem or possibly ‘solved’ a problem which was not a problem at all.2

The introduction of CAD changed not only the manual activity of drawing a line, but also the understanding of the relation between the act of drawing and the thing drawn (in this case, the imagined building or landscape). CAD exposes the close relation of design process to design outcome, revealing how a tool intended to move design information smoothly from an analogue to a digital format had the unintended consequence of transforming the entire design process.

Whereas early CAD software attempted to recreate the experience of pre-digital ways of working, more recent architectural software introduced different metaphors. The conventions of orthographic 2D drafting in architecture and engineering were incrementally replaced by techniques of design based on 3D models. The screen still required the flattening of the 3D representation on to a 2D surface, but the process of translation was concealed in the software, rather than explicitly addressed in the making of the drawing. Unlike the physical process of drawing a perspective or an orthographic projection, in which the representation is ‘constructed’ following a set of standard procedures for ‘flattening’ 3D objects on a 2D plane, 3D modelling software presents a (usually) perspectival view of the model that appears more ‘realistic’ than orthographic projection, but obscures the conventions of projection underlying it. Like the drawing, the digital model is an abstraction that allows one to work on the imagined project, but, unlike the drawing, the software collapses the distinction between drawing and building, making a highly abstract application of projective geometry appear as a direct manipulation of the thing itself. The fact that 3D modelling is in itself a form of projection based on the principles of descriptive geometry remains fully concealed in most software used to produce digital models, for they focus instead on intuitive operations that reproduce physical modelling techniques, such as carving, lofting and mirroring.
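The projective geometry that modelling software conceals can be made explicit in a few lines. The sketch below is a minimal pinhole-camera projection of a 3D point onto a 2D picture plane; the focal length and the sample points are illustrative assumptions, not the API of any particular package.

```python
# A minimal sketch of the projective geometry hidden inside 3D viewers:
# projecting a 3D point onto a 2D picture plane (pinhole-camera model).

def project(point, f=1.0):
    """Project a 3D point (x, y, z) onto the picture plane z = f.

    Uses the standard perspective divide: screen coordinates shrink
    in proportion to the point's distance from the viewer.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the camera (z > 0)")
    return (f * x / z, f * y / z)

# Two corners of a cube at different depths: the farther corner
# projects closer to the image centre, producing foreshortening.
near = project((1.0, 1.0, 2.0))
far = project((1.0, 1.0, 4.0))
print(near)  # (0.5, 0.5)
print(far)   # (0.25, 0.25)
```

Every perspectival view a modeller displays is the result of this kind of divide, applied to each vertex; the ‘realism’ of the image rests on a convention as formal as any orthographic projection.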

This sense of working directly on the building was enhanced by moving beyond the keyboard as interface. The invention of the computer mouse in the early 1960s3 was based on the idea that digital information could be manipulated and handled in the same way as we hold and move physical objects in space. Ever since, the way we interact with digital information on the screen has been based largely on spatial metaphors, and our understanding of digital models assumes that the screen is a window through which we can view and directly manipulate information that is inherently spatial.

Beyond drawing: numerical simulation

Although 3D modelling has had a profound impact on the design process by allowing architects to work on a realistic visual representation, the uses of the 3D model to produce and manage data promise an even more significant impact on the way that buildings are conceived. Early CAD software was unable to evaluate the performance of the future building or provide a foundation for evidence-based design decisions, but the integration of numerical simulation in the design process now allows designers to anticipate many aspects of the behaviour and experiential qualities of the building, in advance of construction. Numerical simulation is another form of representation, a way of communicating essential aspects of the design. With some simplification and addition of specific information, the digital model can be used as the basis to evaluate the lighting, acoustical, structural and thermal performance of a building, providing the basis for performance-based decision-making and communication of numerical results to clients in a way that can be intuitively understood. The integration of simulation and virtual reality has also been used effectively as the basis of communicating experience, for example with the Arup SoundLab, where the acoustical qualities of a proposed space are presented audibly to communicate numerical analysis to a non-specialist audience.4

Numerical simulation depends on the addition of data to the 3D model, specifying details relevant to the specific analysis. For example, before performing an evaluation of thermal performance, it is necessary to add data on materials and construction types, as well as assumptions about the types and scheduling of building services, augmenting the 3D geometry with additional metadata to enable a numerical representation of the building’s thermal behaviour. Just as the abstraction of 3D geometry can serve as the basis for a realistic visual representation, so it can also provide the means to approximate aspects of physical performance. Taken together, the multiple functionality of 3D geometry as a support for design decision-making and communication is quite unprecedented. Whereas pre-digital design depended on many different types of representation, such as plans, sections, physical models and graphical ‘simulations’ of views or material and tectonic aspects of the design, the 3D model now provides a single representation that can be used to produce multiple outputs, depending on the type of analysis needed. One such type of analysis is the study of movement.
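The augmentation of 3D geometry with simulation metadata described above can be sketched as follows. Every name here (`Construction`, `Surface`, the U-values) is a hypothetical illustration rather than any particular BIM or simulation schema; the point is that the area comes from the geometry, while the thermal data must be added to it before any analysis is possible.

```python
# A hedged sketch of augmenting 3D geometry with metadata for a
# thermal evaluation. The schema and values are invented for
# illustration; real tools use far richer material and schedule data.

from dataclasses import dataclass

@dataclass
class Construction:
    name: str
    u_value: float  # thermal transmittance, W/m2K (added metadata)

@dataclass
class Surface:
    area: float                 # m2, derived from the 3D geometry
    construction: Construction  # metadata attached to the geometry

def heat_loss(surfaces, delta_t):
    """Steady-state fabric heat loss in watts: sum of U * A * dT."""
    return sum(s.construction.u_value * s.area * delta_t for s in surfaces)

wall = Construction("insulated cavity wall", u_value=0.3)
glazing = Construction("double glazing", u_value=1.8)
envelope = [Surface(80.0, wall), Surface(20.0, glazing)]

# Heat loss at a 20 K inside/outside temperature difference
print(heat_loss(envelope, 20.0))  # 1200.0
```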

The animated fly-through is a tool provided by most 3D-modelling software packages for simulating the experience of moving through a 3D model. The technique consists of animating a virtual camera along a linear path that approximates the point of view of a person moving through the space. As the name suggests, it is difficult to recreate the experience of walking with this technique, and the result is more often that of flying, a disembodied eye. Fly-throughs also suffer from the ambiguity of scale that one often finds in computer renderings: the relation of structure to human inhabitation remains uncertain, making it difficult to understand and imaginatively recreate the proposed experience. In some cases, this is simply the result of unfamiliar geometry:

Whereas the eye has been trained since the Renaissance to appreciate the length, width and depth of rectangular shapes, it is far less accustomed to evaluate the size of smoothly warped surfaces and volumes. Hence the impression that realizations like Asymptote’s HydraPier in Haarlemmermeer, Netherlands, or Zaha Hadid’s BMW Central Building in Leipzig, Germany, are like momentarily landed futuristic spaceships. With spaceships they share a streamlined appearance, and above all an enigmatic scale that appears as a consequence of their non-conventional geometry.5

The ambiguity of scale described by Picon is universal enough in recent avant-garde computational architecture to represent the result of a working method and not merely a stylistic choice. It is, in large part, an outcome of the software and its capabilities.
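Reduced to its mechanics, the fly-through described above is little more than interpolating a virtual camera along a polyline at a fixed eye height, which is partly why the result feels disembodied. A minimal sketch, with illustrative waypoints:

```python
# A minimal sketch of the fly-through technique: a virtual camera is
# interpolated evenly along a polyline of waypoints. No head bob,
# no footsteps, no gaze: a disembodied eye gliding through the model.

def lerp(a, b, t):
    """Linear interpolation between two 3D points."""
    return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))

def camera_path(waypoints, frames):
    """Yield one camera position per frame, moving evenly along
    the polyline defined by the waypoints."""
    segments = len(waypoints) - 1
    for frame in range(frames):
        t = frame / (frames - 1) * segments
        i = min(int(t), segments - 1)
        yield lerp(waypoints[i], waypoints[i + 1], t - i)

# A straight path at 1.6 m eye height, sampled over 5 frames.
path = list(camera_path([(0, 0, 1.6), (10, 0, 1.6)], 5))
print(path[0])  # (0.0, 0.0, 1.6)
print(path[2])  # (5.0, 0.0, 1.6)
```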

The simulation of human movement is an area of study in itself, focused for example on anticipating the patterns of movement of pedestrians, or the time required for evacuation in case of fire. Such simulations are based on assumptions about aspects of the building that affect pedestrians’ decisions about the best route, constructing a model that includes only these selected criteria. Space Syntax is one well-established method for simulating the influence of the urban environment on pedestrian behaviour, based on a numerical abstraction that describes the degree of connectedness of each zone within an urban context.6 This methodology has been questioned as overly reductive, because it discards information such as building height and land use that could reasonably be expected to have an impact on the choices made by individual pedestrians.7 Importantly for this discussion, the underlying logic on which the results are based, and the accuracy with which the model can be applied, are very difficult for the non-expert to evaluate critically. The communication of underlying presuppositions and assumptions is a central challenge for evidence-based design, and, with simulations, these assumptions tend to be concealed by the software or remain too complex to present to a non-specialist audience. This can be a problem if critical decisions about urban form or landscape or building design need to be made on the basis of such evidence, as they often are.8
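The kind of numerical abstraction on which Space Syntax rests can be illustrated with a toy example: the layout is reduced to a graph of spatial zones, and a zone's degree of integration is estimated from its mean topological depth to all other zones. The street network below is invented for illustration, and the reduction discards exactly the kind of information, such as building height and land use, that the critique above identifies.

```python
# A hedged, toy illustration of a Space-Syntax-style abstraction:
# an urban layout reduced to a graph of zones, each characterized
# by its mean topological depth to every other zone.

from collections import deque

def mean_depth(graph, start):
    """Breadth-first search: the average number of steps from
    `start` to every other node in the graph."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in depth:
                depth[neighbour] = depth[node] + 1
                queue.append(neighbour)
    others = [d for n, d in depth.items() if n != start]
    return sum(others) / len(others)

# A toy axial map: main street 'B' connecting three side streets.
streets = {
    "A": ["B"],
    "B": ["A", "C", "D"],
    "C": ["B"],
    "D": ["B"],
}

# The best-connected street has the lowest mean depth (most integrated).
print(mean_depth(streets, "B"))  # 1.0
print(mean_depth(streets, "A"))  # about 1.67
```

Everything the model ‘knows’ about the city is in that dictionary of adjacencies; whether pedestrians actually behave as the graph predicts is precisely what the non-expert cannot evaluate from the output alone.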

Black box versus open design

All this suggests that the way in which the 3D model has been implemented in commercial design software is problematic because it is self-focused, providing insufficient opportunities to test ideas against other types of design representation. The idea behind building information modelling (BIM), the current incarnation of CAD software, is the translation of a data model into a working method, by which the model and its associated metadata become the primary repository of design information and tool for design communication. The intention is that, from the single model, multiple types of representation can be generated to extract different types of information for different audiences. We might question whether such an approach to the design process will lead to greater clarity of understanding and communication, or whether, like CAD in John Frazer’s estimation, BIM might turn out to be another bodged response to a badly stated problem. This will depend very much on how BIM software is presented to its users. It could become an inscrutable black box that somehow delivers predictable results only so long as the designer consistently stores all work, ideas and project information in one monolithic and total digital representation. Alternatively, the building information model could become an opportunity to creatively rethink the design process and the relation of software to the diverse set of tools that architects and designers of the built environment have always used to understand and communicate the future building. In my opinion, recent history suggests that only the latter option will provide a fluid enough process to capture the richness and nuance implicit in an experience such as moving through a building or landscape. I will now briefly outline what I think this future could look like.

In contrast to a method of working that remains enclosed in the abstractions of the 3D model, I would like to suggest what a more ‘open’ design process would look like, and how this could be enabled by software. For the purpose of the argument, I will refer to the opposite of the black box as ‘open design’.9

To start with, we need an approach that recognizes the limitations of each individual software package. This implies a large ecosystem of software, in which each individual package is carefully designed to accomplish its particular task, as well as allowing a high level of interoperability between packages. It implies an understanding of the design process and of the flow of information between software, physical models, material and tectonic investigations, and other forms of design exploration. When software fails, as it inevitably does, there must be sufficient understanding of the underlying mechanics to repair and/or re-invent one’s digital tools to meet the questions at hand. In the case of simulation, this approach could involve methods that permit the designer to test simulation outcomes by running multiple simulations to identify the influence of a given parameter or group of parameters, a method known as sensitivity analysis.10 Software becomes a tool to test multiple ‘what if’ scenarios, as well as testing the accuracy of results.
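Sensitivity analysis of the one-at-a-time kind mentioned above can be sketched as follows. The ‘simulation’ here is a deliberately trivial stand-in for a real building model, with made-up coefficients; the structure of the method, perturb one parameter, re-run, compare, is the point.

```python
# A minimal sketch of one-at-a-time sensitivity analysis: run the
# same (toy) simulation repeatedly, perturbing one parameter at a
# time, and rank the parameters by their influence on the result.

def simulate(u_value, glazing_ratio, infiltration):
    """Toy annual heat-demand model (arbitrary units, invented
    coefficients standing in for a real simulation engine)."""
    return 100 * u_value + 40 * glazing_ratio + 60 * infiltration

baseline = {"u_value": 0.3, "glazing_ratio": 0.4, "infiltration": 0.5}

def sensitivity(model, params, step=0.1):
    """Perturb each parameter by `step` in turn and record the
    resulting change in the model output."""
    base = model(**params)
    effects = {}
    for name in params:
        perturbed = dict(params, **{name: params[name] + step})
        effects[name] = model(**perturbed) - base
    return effects

# Rank parameters by the magnitude of their effect on the output.
for name, effect in sorted(sensitivity(simulate, baseline).items(),
                           key=lambda kv: -abs(kv[1])):
    print(f"{name}: {effect:+.1f}")
```

Running many such perturbed simulations is how a designer can test ‘what if’ scenarios and, just as importantly, discover which assumptions the results actually hinge on.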

An open design approach also implies a detailed understanding of how diverse computational methods interact with each other and with analogue methods over the course of the design process. The computational design process offers a means to track and understand the stages in design through tacit and explicit logging of design activity and communication. Figure 4.6.1 presents a computer-generated visualization of a design process informed by simulation, which provides insights into the range of design responses to building thermal performance data in a parametric design process.11 This image, presenting changes to a single script over about 24 hours, illustrates the chronological relationship between simulation feedback, modification to building geometry and modification to parametric definitions, providing a detailed understanding of specific aspects of a design process. Although this representation was developed as a research tool, similar visual representations of process have been used to inform designers about the history and authorship of their own designs,12 and to engage a wider audience in a participatory design process.13

It is also arguably the case that an open design process should include the ability to write some software yourself. Acquiring the programming skills required to contribute to open-source projects is outside the interests and skillset of most designers, but the ability to craft simple scripts to accomplish specific design tasks is an ability that any designer can master. In a design culture that increasingly favours the computer as the primary means of design development, writing one’s own bespoke software is an act of creative freedom that supports a critical understanding of software and its relation to various forms of architectural practice.
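As an example of the kind of simple bespoke script any designer can master, the following few lines generate setting-out points for a straight stair flight, making the rise-and-going logic explicit rather than leaving it buried inside a modelling package. The dimensions are illustrative.

```python
# A few lines of bespoke parametric logic: nosing points for a
# straight stair, dividing the total rise into equal risers that
# never exceed a maximum riser height. Dimensions are illustrative.

import math

def stair_points(total_rise_mm, max_riser_mm=170, going_mm=250):
    """Return (x, z) nosing points for a straight flight, using the
    smallest number of equal risers no taller than max_riser_mm."""
    steps = math.ceil(total_rise_mm / max_riser_mm)
    riser = total_rise_mm / steps
    return [(i * going_mm, i * riser) for i in range(steps + 1)]

points = stair_points(2700)
print(len(points) - 1)  # 16 risers
print(points[-1])       # (4000, 2700.0)
```

A script this small is already a design instrument: change the maximum riser or the going and the whole flight regenerates, with the designer, not the software vendor, deciding what the rule is.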

Figure 4.6.1

Figure 4.6.1 Detailed analysis of parametric design development with simulation feedback. The visualization presents a record of distinct versions of a design produced during a 24-hour period. The design was produced using scripts: text-based commands for defining geometry, metadata related to simulation and parametric relationships. Script versions are arrayed along concentric circles representing named scripts.

Source: Julien Nembrini and Mark Meagher 2013

Mark Burry has called for a hybrid design process, incorporating material investigations and full-scale 1:1 models, alongside digital investigations that he calls ‘post-digital’.14 The mixing of digital models with large-scale physical studies overcomes some of the limitations of the 3D model, such as ambiguity of scale and absence of methods for testing material and tectonic implications of formal decisions made in the software. The construction of prototypes at a range of scales can also be helpful as a means of validating simulations of structural performance15 (Figure 4.6.2). Physical models provide an insight into multiple aspects of performance and experience that cannot readily be tested in the digital model, aspects that go beyond purely formal concerns to address a wider range of sensory qualities. These experiments involve a design process including tactile engagement with materials alongside a sophisticated approach to simulation through generation of digital models.16 This is the approach taken by the ‘persistent models’ presented by Phil Ayres in Chapter 4.7 of this book; it is also the approach taken in research laboratories such as the Media and Design Lab of Jeffrey Huang,17 at the Ecole Polytechnique Fédérale de Lausanne.

Figure 4.6.2

Figure 4.6.2 Timber fabric module prototype

Source: Markus Hudert, IBOIS Laboratory for Timber Constructions 2013

With software whose workings are hidden, the black box referred to earlier, it is all too easy to produce and analyse results (geometry and associated data), but very difficult to evaluate those results meaningfully in terms of experience, construction or other complex and nuanced aspects. Achieving clarity of communication requires an understanding of the software, including its limitations, and some involvement in assembling the process of design. The experience of movement is particularly resistant to reductive approaches. Being inherently complex, it does not fit well within a totally digital approach. Its successful resolution requires intuition and skill, and it benefits from a hybrid approach, from an ‘open’ design process.

Notes

1 McLuhan 1964.

2 J.H. Frazer, ‘The generation of virtual prototypes for performance optimization’, in Oosterhuis and Feireiss 2006, pp. 208–12.

3 W.K. English, D.C. Engelbart and M.L. Berman (1967) Display-selection techniques for text manipulation, IEEE Transactions on Human Factors in Electronics, vol. HFE-8, no. 1, pp. 5–15.

4 Loukissas 2012, p. 47.

5 Picon 2010, p. 125.

6 A. Turner, A. Penn and B. Hillier (2005) An algorithmic definition of the axial map, Environment and Planning B: Planning and Design, vol. 32, no. 3, pp. 425–44.

7 C. Ratti (2004) Urban texture and space syntax: Some inconsistencies, Environment and Planning B: Planning and Design, vol. 31, no. 4, pp. 487–99.

8 This discussion of simulation is based in part on conversations with Philip Langley, whose presentation, ‘Branch, merge, commit – New forms of open source for designing with BIM’, addresses this topic. See ‘The Bartlett Conference: Pedagogy meets Big Data and BIM’ (June 2013). Available at: www.bartlett.ucl.ac.uk/bartlett/latest/events/bartlett-conference (accessed 19 August 2014).

9 The term ‘open design’ was first used by this author in the context of a research project at Harvard University called ‘OpenD’, which involved the visual representation of a design process. See M. Meagher, K. Bielaczyc and J. Huang (2005) OpenD: Supporting parallel development of digital designs, Proc. Designing User Experience (DUX ‘05); and J. Huang, K. Bielaczyc and M. Meagher (2006) Liquid ontologies, metaperspectives, and dynamic viewing of shared knowledge, Proceedings of I-KNOW 2006 6th International Conference on Knowledge Management, pp. 560–7.

10 J. Nembrini, S. Samberger, A. Sternitzke and G. LaBelle (2012) Combining sensitivity analysis with parametric modeling to inform early design, Proceedings of 2012 Symposium on Simulation for Architectural and Urban Design.

11 J. Nembrini, M. Meagher and A. Park (2013) Usage patterns of scripting interfaces for building performance assessment at early design stage, Proceedings of 13th International Conference of the International Building Performance Simulation Association.

12 U. Hirschberg (2003) Transparency in information architecture: Enabling large scale creative collaboration in architectural education over the internet, International Journal of Architectural Computing, vol. 1, no. 1, pp. 12–22.

13 K. Al-Kodmany (1999) Using visualization techniques for enhancing public participation in planning and design: Process, implementation, and evaluation, Landscape and Urban Planning, vol. 45, no. 1, pp. 37–45.

14 M. Burry (2005) Homo Faber, Architectural Design, vol. 75, no. 4, pp. 30–7.

15 Y. Weinand and M. Hudert (2010) Timberfabric: Applying textile principles on a building scale, Architectural Design, vol. 80, no. 4, pp. 102–7.

16 Examples of such a hybrid approach include Burry’s own design studios at RMIT; research laboratories such as Jeffrey Huang’s Media and Design Lab and Yves Weinand’s IBOIS (Laboratory for Timber Constructions) (both at the Ecole Polytechnique Fédérale de Lausanne); and the Centre for Information Technology and Architecture, where the prototypes presented by Phil Ayres in Chapter 4.7 of this book were developed.

17 See Brizzi and Giaconia 2009, pp. 102–4, and N. Zuelzke, T. Patt and J. Huang (2012) Computation as an ideological practice, Digital Aptitudes, Proceedings of ACSA, Boston, MA, 2012.