Design Tooling - Design Machines
Return to front page
Information
Introduction
This module engages the design of "things that design"
and explores the relationship between designers (or design mechanisms)
and designs. Design is often discussed as a multi-faceted process.
As such, design tools have often been developed to address isolated
issues in design (e.g. analysis, generation, fabrication, evaluation).
All of these systems segment the design process into isolated parts
and consign coherent design integration to the intuitive devices
of human designers. While this is often desirable, it is also worthwhile
to explore completely autonomous design systems which perceive and
act independently of human intervention. In this class of systems,
the intuitive gaps must be explicitly filled and the integration
of related issues must be explicitly resolved. Students will be
challenged to step back from direct design and instead to state
explicit connections between perception and design outcome. Students
will be shown how to develop simple autonomous design machines.
These machines will attempt to mimic the behaviors of living creatures
which build nests, hives, and mounds in complex physical environments.
Background
The 1981 paper "Design Machines" by
George Stiny and Lionel March outlined the development of an autonomous
system for creating designs. Their framework divides the design
process into four mechanisms: Receptor, Effector, Language of Designs,
and Design Theory. The receptor creates representations of external
conditions and the effector stimulates external processes or artifacts
from designs. According to Stiny and March, receptors can be sensors
or traditional input devices like a keyboard or a mouse. Effectors
might be printers, CNC machines, or even robots. Together the receptor
and effector determine the design context. The Language of Designs
gives an account of the formal parameters of designs. Finally, the
Design Theory describes the correspondence between the Language
and the Context.
![](images/weaverbirds.png)
Weaver Bird
Implementation
Perception
Introduction
The following section outlines a few simple techniques to get students
started on perceiving and interpreting an environment through a
computational lens. According to Stiny and March, perception is
the responsibility of the receptor mechanism. The receptor and effector,
taken together, make up the design context: the interface between
the design machine and the outside world.
Images
Image Types:
Camera Images: Produced using a lens and
a light-sensitive receptor.
Transmission Images: Produced when light
is shone through an object. (ex. X-ray)
Sonic Images: Produced by reflecting sound waves
off an object. (ex. medical ultrasound)
Radar Images: The traditional radar screen.
Pressure Maps: Information from a grid of
pressure sensors.
Range Images: A matrix of distances to different
objects in a space. (ex. sonar)
Pixel-Based Approach
Scanned 2D images and digital photographs can be
used to import information about context into a computational platform,
though images can also be produced by other means. These imported
images become digital objects when they are translated into a grid
of colored pixels. All image recognition and manipulation algorithms
rely on procedures which examine and manipulate images at the pixel
level.
Getting Started: A good place to start is
with operations on bi-level images. Images containing only
black pixels and white pixels are the easiest to interpret and manipulate.
Thresholding is a means of converting a gray-level or color image
to a bi-level image. One of the ways to define objects in an image
is through connectivity. The 'seed' method can identify a bounded
object from a single pixel seed. The algorithm checks the neighbors
of the seed and the neighbors' neighbors until it has found all
the boundaries of the object. Once objects are defined in
an image, they can be interrogated through geometric measures such
as area and perimeter. Through a method called erosion,
"extra" pixels can be removed from an image in order to
produce cleaner images in which objects can be picked out more readily.
Reference: J.R. Parker, Practical Computer
Vision Using C. John Wiley & Sons, New York, 1994.
Source Code
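The operations above can be sketched in a few lines of Python. This is a minimal illustration, not the code from Parker's book; the toy image and the 4-connected neighborhood are assumptions:

```python
from collections import deque

def threshold(gray, t):
    """Convert a gray-level image (list of rows of 0-255 values)
    into a bi-level image: 1 for pixels at or above t, else 0."""
    return [[1 if v >= t else 0 for v in row] for row in gray]

def seed_fill(img, seed):
    """Grow a region from a single seed pixel: collect every 1-pixel
    reachable through 4-connected neighbors (the 'seed' method)."""
    h, w = len(img), len(img[0])
    region, frontier = set(), deque([seed])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) in region or not (0 <= r < h and 0 <= c < w):
            continue
        if img[r][c] != 1:
            continue
        region.add((r, c))
        frontier.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return region

def erode(img):
    """Remove 'extra' boundary pixels: keep a 1-pixel only if all
    four of its neighbors are also 1 (a simple binary erosion)."""
    h, w = len(img), len(img[0])
    def on(r, c):
        return 0 <= r < h and 0 <= c < w and img[r][c] == 1
    return [[1 if on(r, c) and on(r - 1, c) and on(r + 1, c)
                  and on(r, c - 1) and on(r, c + 1) else 0
             for c in range(w)] for r in range(h)]

gray = [[10, 200, 210, 10],
        [10, 220, 230, 10],
        [10, 10, 10, 10]]
bw = threshold(gray, 128)      # bi-level image
blob = seed_fill(bw, (0, 1))   # object containing pixel (0, 1)
print(len(blob))               # area of the object, here 4
```

Once a region is in hand, area is simply the pixel count, and perimeter can be estimated by counting region pixels with at least one neighbor outside the region.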
Shape-Based Approach: Shape Grammars, developed
by G. Stiny and J. Gips, stand as a critique of the vast majority
of computational vision systems. Shape Grammars can be aligned with
gestalt theories of perception, in which compositions are more complex
than the sum of their constituent parts. According to Shape Grammarians,
images are not composed of pixels; rather, the structure/complexity
of an image is a function of the operations performed on it. Apart
from its reliance on shape, the Shape Grammar formalism does not adhere
to any "fixed" means of decomposing images.
Reference: http://www.shapegrammar.org
(*) Topics for Discussion
Many theorists, including Stiny, have argued that
much of creative behavior is embedded in perception. "New"
A.I. also claims that perception is inseparable from reasoning.
Can designers benefit from discussing and implementing "ways
of seeing" explicitly? How might new means of perception enabled
by technology change the way that architects see their own role
as designers of environments?
(**) Assignment
Design a mode of perception which embodies desirable
biases. Use this perceptive mechanism to interpret various artifacts
and environments.
![](images/star.png)
![](images/seed.png)
![](images/thumbtacks.png)
![](images/pressuremap.png)
![](images/x-ray.png)
![](images/sonar.png)
Shape Grammars: How many ways can you decompose this shape?
Pixelization: Building up images from discrete units
Camera Image
Radar Image
Transmission Image
Range Image
Video
Computer Vision using a WebCam by Josh Nimoy et
al.
"What is Computer vision? Realtime video input
digitally computed such that intelligent assertions can be made
by interactive systems about people and things. Popular techniques
in new media arts and sciences include the ability to detect movement
and presence in spaces, appearance of objects or people, how many
of them are there, which way it's facing, and edge path vectors.
Myron brings computer vision to a growing number of interactive
media development platforms, allowing cameras connected to your
computer to control just about anything. This software aims to make
computer vision easy and inexpensive for the people! Currently,
it has more "tracking" functionality than other plugins
with similar aim. "
References: http://webcamxtra.sourceforge.net/
![](images/video_simple.png)
![](images/video_pixel1.png)
![](images/video_pixel2.png)
![](images/video_vectors.png)
![](images/video_track.png)
Web Cam Still
Pixelized
Bubblized
Vector Trace
Tracking
Microworlds
Introduction
Microworlds offer constrained environments whose very nature helps
users to develop an understanding of complex causal relationships.
The goal of microworlds is, simply stated, "to develop new
external systems of representation that foster more effective learning
and problem solving." (Goldin 1991) Piagetian learning theory has
been one of the main influences in the development of microworlds.
Piaget's theory of constructivism asserts that people construct
knowledge about the world through experience. The first developers
of microworlds held that one's construction of knowledge is aided
by the process of making external or shareable artifacts. Microworlds
can help architects learn about the implicit logic of organizational
and material systems in buildings by constructing the logics of
those systems explicitly.
Educational Examples
Logo (Papert), the Sim Series (SimCity, SimLife, The Sims), and
StarLogo (Resnick) are computer programs that help people make sense
of the world by making things in the computer. Logo was developed
at MIT by Seymour Papert, a well-known mathematician and advocate
for computers in education. Logo is a virtual environment intended
to teach children mathematics by allowing them to draw pictures
through numerical instructions to a virtual drawing agent, the turtle.
Children playing within the world of Logo learn to understand the
nature of geometrical figures. (ex. a square has four equal sides
and four equal angles) StarLogo is an elementary programming language
designed by Mitchell Resnick for people with no programming experience.
"With StarLogo, people can write rules for thousands of graphic
creatures on the computer screen, then observe the group-level behaviors
that emerge from the interactions."(Resnick 1994)
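The Logo idea can be illustrated with a minimal drawing agent in Python. This is a sketch, not Papert's Logo; the Turtle class here is a stand-in that records vertices instead of drawing to a screen:

```python
import math

class Turtle:
    """A minimal Logo-style drawing agent: it holds a position and a
    heading, and records the vertices it visits."""
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0
        self.path = [(0.0, 0.0)]

    def forward(self, distance):
        # Move along the current heading and record the new vertex.
        self.x += distance * math.cos(math.radians(self.heading))
        self.y += distance * math.sin(math.radians(self.heading))
        self.path.append((round(self.x, 6), round(self.y, 6)))

    def right(self, angle):
        self.heading -= angle

# "A square has four equal sides and four equal angles":
t = Turtle()
for _ in range(4):
    t.forward(100)   # four equal sides...
    t.right(90)      # ...and four equal 90-degree turns
print(t.path)        # the turtle returns to its starting point
```

The geometric fact is embedded in the program: any other combination of side lengths and turn angles fails to close the figure, which is exactly the kind of discovery Logo was designed to provoke.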
References:
Edwards, L. Microworlds as Representations.
Goldin, G. (1991) The IGPME working group on representations.
In F.Furinghetti (ed) Proceedings of the XV Conference of the International
Group for the Psychology of Mathematics Education vol. 1, p. xxvii:
Assisi, Italy.
Papert, S. (1980). Mindstorms: Children, Computers, and Powerful
Ideas. New York: Basic Books.
Resnick, M. (1994) "Learning About Life." Artificial
Life, vol. 1, no. 1-2.
Cellular Automata
Developed by John von Neumann (1903-1957). "Cellular
automata are discrete dynamical systems whose behavior is completely
specified in terms of a local relation. A cellular automaton can
be thought of as a stylized universe. Space is represented by a
uniform grid, with each cell containing a few bits of data; time
advances in discrete steps and the laws of the "universe"
are expressed in, say, a small lookup table, through which at each
step each cell computes its new state from that of its close neighbors.
Thus, the system's laws are local and uniform." (Brunel University
Artificial Intelligence Site)
Example by Simon Greenwold
http://www.architecture.yale.edu/872a/processingExamples/CA3D_Template/index.html
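A minimal one-dimensional cellular automaton in Python makes the quoted definition concrete: a uniform grid of cells, a small lookup table as the "laws of the universe," and discrete time steps in which each cell computes its new state from its close neighbors. This sketch uses Wolfram's elementary-CA rule numbering, which is an assumption, not part of Greenwold's example:

```python
def make_rule(number):
    """Build the 'small lookup table': maps each 3-cell neighborhood
    (left, self, right) to the cell's next state, per Wolfram's
    numbering of elementary cellular automata."""
    return {(l, c, r): (number >> (l * 4 + c * 2 + r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def step(cells, rule):
    """One discrete time step: every cell computes its new state
    from its close neighbors (wrapping at the edges)."""
    n = len(cells)
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# A single live cell evolving under rule 90 (left XOR right),
# which traces out a Sierpinski-triangle pattern over time.
cells = [0] * 7
cells[3] = 1
for _ in range(3):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells, make_rule(90))
```

The same structure extends directly to 2D and 3D grids; only the neighborhood and the lookup table grow.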
Suggested Assignments
1) Design a rule set for a cellular automaton which deals provocatively
with figure-ground relationships.
2) Construct a critique of the voxel approach to designing in 3D.
![](images/3dautomata.png)
![](images/cagrid.png)
![](images/3dautomata2.png)
Simon Greenwold 2003
Braitenberg Vehicles
Introduction
Braitenberg Vehicles* demonstrate how organization can emerge
out of the interaction of a set of simple machines operating without
centralized control. These examples build on code originally written
by Simon Greenwold. Braitenberg vehicles are implemented here
as computational systems in which simulated sensors and motors are
linked to produce behavior. These vehicles have left and right light
sensors and left and right motors. All these mechanisms are defined
independently, but they can be linked in different ways to imbue
vehicles with different behaviors. Links between sensors and motors
can follow one of three underlying schemes:
(1) left sensor to left motor / right sensor to
right motor
(2) left sensor to right motor / right sensor to left motor
(3) both sensors to both motors
Vehicles respond to the environment
In this example the mechanisms are wired according to scheme (2).
The speed of each motor is directly proportional to the amount of
light detected by its corresponding sensor. In the case of scheme
(2) crossed sensor/motor links cause the vehicles to turn toward
light spots. The vehicles have been placed on a black and white
photograph depicting some diffuse shadows. As the vehicles traverse
the 2D space of the photograph, they appear to be attracted to or
to flee from different features.
*Braitenberg, Valentino. Vehicles: Experiments in Synthetic Psychology.
MIT Press, Cambridge 1986.
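Scheme (2) can be sketched in Python as follows. The Vehicle class, the grid sampling, and the motion constants are illustrative assumptions, not Greenwold's code; the key lines are the crossed sensor-to-motor wiring:

```python
import math

def brightness(imagemap, x, y):
    """Sample the imagemap (a list of rows of 0.0-1.0 values) at a
    point, clamping to the edges."""
    r = min(max(int(y), 0), len(imagemap) - 1)
    c = min(max(int(x), 0), len(imagemap[0]) - 1)
    return imagemap[r][c]

class Vehicle:
    """A Braitenberg vehicle with crossed links (scheme 2): the left
    sensor drives the right motor and vice versa, so the vehicle
    turns toward bright spots."""
    def __init__(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading

    def step(self, imagemap, sensor_offset=1.0, gain=1.0):
        # Sensors sit ahead of the body, angled out to each side.
        a = self.heading
        lx = self.x + sensor_offset * math.cos(a + 0.5)
        ly = self.y + sensor_offset * math.sin(a + 0.5)
        rx = self.x + sensor_offset * math.cos(a - 0.5)
        ry = self.y + sensor_offset * math.sin(a - 0.5)
        # Crossed wiring: each motor's speed is proportional to the
        # light detected at the *opposite* sensor.
        right_motor = gain * brightness(imagemap, lx, ly)
        left_motor = gain * brightness(imagemap, rx, ry)
        # Differential drive: the speed difference turns the vehicle,
        # the average speed moves it forward.
        self.heading += (right_motor - left_motor) * 0.5
        speed = (left_motor + right_motor) / 2.0
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)

# A vehicle on a gradient imagemap (brighter toward larger y).
imagemap = [[r / 9.0] * 10 for r in range(10)]
v = Vehicle(5.0, 5.0, 0.0)
v.step(imagemap)
print(v.heading)   # positive: the vehicle has turned toward the light
```

Rewiring scheme (1) (uncrossed links) makes the same vehicle turn away from light; the behavior lives entirely in the wiring, not in any central controller.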
Sticky Environments
This is a 2D example in which the imagemap is populated by small
bright "sticky" elements which can attach to vehicles
and other elements within a certain proximity. Over time, elements
are rearranged by these interactions. The vehicles also leave color
trails as they move. These trails can be reinforced by the trails
of other vehicles. Both the trails left by the vehicles and the
arrangement of sticky elements change the feature space of the imagemap.
This creates a feedback loop, influencing the future activity of
vehicles.
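The trail side of this feedback loop can be sketched in a few lines of Python. The deposit amount and the grid representation are illustrative assumptions:

```python
def deposit_trail(imagemap, x, y, amount=0.1):
    """Brighten the imagemap cell under a vehicle. Because vehicles
    read the same grid they write to, overlapping trails reinforce
    each other and change the feature space future vehicles perceive."""
    r = min(max(int(y), 0), len(imagemap) - 1)
    c = min(max(int(x), 0), len(imagemap[0]) - 1)
    imagemap[r][c] = min(1.0, imagemap[r][c] + amount)

imagemap = [[0.0] * 4 for _ in range(4)]
deposit_trail(imagemap, 1.2, 2.7)   # one vehicle passes
deposit_trail(imagemap, 1.2, 2.7)   # a second vehicle reinforces the trail
print(imagemap[2][1])
```

Writing perception and action into the same grid is what closes the loop: the environment becomes a shared memory, much as pheromone trails do for ants.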
![](images/shadowmap.png)
![](images/vehicle.png)
![](images/vehicles_cluster.png)
![](images/vehicle2.png)
![](images/vehicles_trails.png)
Yanni Loukissas 2004
Anatomy of a Braitenberg Vehicle
2D Vehicles, 3D Trails
The intent behind this example is to develop an interface for students
to explore design machines in a 3D environment. The 3D vehicles
in this example can be used to simulate environmental forces or
to search for emergent organizational phenomena. In the applet below,
Braitenberg vehicles move over a 2D imagemap collecting information
about light and dark spots. This information is used to construct
forms in 3D. In the more advanced examples, information from the
3D form is projected back onto the source imagemap. This example
is a basic extrusion of the standard 2D Braitenberg
model according to brightness levels.
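The basic extrusion can be sketched in Python: lift each point of a vehicle's 2D path to a height proportional to the brightness beneath it. The names and the sample data here are illustrative assumptions:

```python
def trail_to_3d(path, imagemap, max_height=10.0):
    """Lift a vehicle's 2D path into a 3D trail: each visited (x, y)
    point gets a z value proportional to the brightness of the
    imagemap cell beneath it."""
    points3d = []
    for (x, y) in path:
        r = min(max(int(y), 0), len(imagemap) - 1)
        c = min(max(int(x), 0), len(imagemap[0]) - 1)
        points3d.append((x, y, imagemap[r][c] * max_height))
    return points3d

# A short path over a small brightness map (rows of 0.0-1.0 values).
path = [(1.0, 1.0), (2.0, 1.5), (3.0, 2.0)]
imagemap = [[0.0, 0.2, 0.4, 0.6],
            [0.1, 0.3, 0.5, 0.7],
            [0.2, 0.4, 0.6, 0.8]]
points = trail_to_3d(path, imagemap)
print(points)
```

Projecting the constructed form back onto the imagemap (for instance, darkening cells under tall geometry) is what turns this one-way extrusion into the feedback loop used in the advanced examples.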
![](images/3dvehicles/3dAnts4.png)
![](images/vehicle.png)
![](images/3dvehicles/3dAnts4a.png)
![](images/3dvehicles/3dAnts5.png)
Yanni Loukissas 2004
Preliminary Controls: Use "Shift", "x" and "z" with the mouse to pan, zoom, and rotate.
Shadow Constructor
The vehicles in this example build surfaces instead of trails.
They are still guided by a source imagemap. The constructed surfaces
cast shadows on the imagemap. This results in a feedback loop which
augments the behavior of the vehicles.
![](images/3dvehicles/3dAnts15b.png)
![](images/vehicle_shadow.png)
![](images/3dvehicles/3dAnts13.png)
![](images/3dvehicles/3dAnts14.png)
Yanni Loukissas 2004
Preliminary Controls: Hold down "d" to draw. Press "q" to start the vehicles. Use "Shift", "x" and "z" with the mouse to pan, zoom, and rotate.
Advanced Shadow Constructor
This example builds on the previous two, adding the ability to have
multiple imagemaps and more control over shadows. Users have the
option to select one or more overlapping imagemaps as a starting
point. Vehicles can read many overlaid maps. Users can also manually
place light and dark sections on the imagemap. This is just a starting
point for the exploration of multi-level microworld investigations
using light and shadow as motivating parameters. The next step is
to add a new class of vehicles which create "activity areas"
which augment and respond to the distribution of light and shadow.
Design students playing with multi-level microworlds like this have
the opportunity to explore rule-based systems which are sensitive
to multiple dimensions of contextual information.
Version 1
Version 2
![](images/3dvehicles/3dAnts15a.png)
![](images/3dvehicles/3dAnts_life.png)
![](images/vehicle_shadow.png)
![](images/3dvehicles/3dAnts2.png)
![](images/3dvehicles/3dAnts17.png)
Yanni Loukissas 2004
Preliminary Controls: Hold down "d" to draw. Press "q" to start the vehicles. Use "Shift", "x" and "z" with the mouse to pan, zoom, and rotate.
Topics for Discussion
By working with decentralized computational systems, students of
architecture can explore how designs might be developed in a "bottom
up" fashion. Students might also come to understand the interdependence
of design behavior (perception and action) and context. With respect
to 3D, how does it change the stakes for the use of computation in
design? What additional kinds of investigations does a third dimension
allow?
Suggested Assignments
1) Design an environment to accommodate a programmed
vehicle (ex. painting with light).
2) Design a vehicle which will build a predetermined
structure in response to a given environment.
Design Games
In this educational scenario, design machines
will be situated as agents within the context of abstract design
games. This strategy builds on research conducted in the 1980s at
the MIT Department of Architecture in which architectural design
was explored through the metaphor of a game (Habraken 1987). The
games developed at that time were entitled 'Concept Design Games.'
They attempted to highlight some of the vital characteristics of
the design activity by focusing on the interaction of designers.
This scenario proposes the development of systems which can exhibit
visual reasoning of the nature required to play 'Concept Design
Games.' The initial game to be explored using architecture design
machines is a variation of the silent game, developed by Habraken.
In the silent game, designer/players build elaborate visual compositions
through the use of patterns. This game highlights the implicit understandings
that develop between designers in making and projecting patterns
through visual compositions.
Topics for Discussion
As human designers/players attempt to anticipate
the behavior of the machine they become engaged in thinking about
how design happens. The interaction between human and machine designer/players
can be a controlled and informative opportunity for teachers and
students alike to reflect on intuitive and mechanistic approaches
to design.
Suggested Assignments
1) Play the silent game with a design machine and
discuss the results.
2) Develop a new concept design game and program
a design machine to play it.
![](images/game.png)
![](images/vehiclegame.png)
![](images/cdg1.png)
![](images/cdg2.png)
![](images/cdg3.png)
![](images/cdg4.png)
Yanni Loukissas 2004
User intervention with the use of Braitenberg Vehicles.
Diagrams from Concept Design Games by John Habraken and Mark Gross.