Ideas on how to integrate computers with the design process are nearly as old as practical electronic computers. Two articles from 1956—G.R. Price's piece "How to speed up invention" for Fortune Magazine (at the time publishing serious long-form pieces) and D.T. Ross's conference paper "Gestalt programming: A new concept in automatic programming"—conceived of an asymmetric conversational process between designers and machines, in which the machine carries out tedious calculations involving material properties, while the designer makes high-level design decisions. In the terminology of Donald Schön's influential view of design as a reflective conversation with the design situation, this is in retrospect a proposal for the machine to participate in the design conversation by increasing the "backtalk" of a situation, crunching numbers to illuminate current constraints and implications.
The first serious requirements analysis for a design-support system (chronicled in J. Francis Reintjes's excellent history, Numerical Control) determined that it should have a graphical input method allowing designers to make and modify sketches; the system would both display refined sketches back to the designer, and simultaneously convert them into internal representations on which automated numerical analyses (such as stress analysis) could be performed. The result would be a system functioning in two roles: "at some times, it would be the designer's slave, at others it would alert the designer to impossible requirements or constraints being imposed" (p. 96). This dual view of a backend automated reasoning system coupled with a front-end interactive modeling tool has since gone through a series of evolutions, with its parts variously emphasized or de-emphasized.
Early work on backend reasoning showed that designers were willing to try out more modifications when automated stress analysis was available. It also began exploring computerized parts catalogs, which let designers both simulate the physical properties of a known part and quickly retrieve parts with specific desired properties.
Ivan Sutherland's now-famous Sketchpad system provided a front end, with a light pen and real-time graphics display that were quite novel at the time. He showed some advantages of computer sketching over paper sketching, such as the ability to precisely draw diagrams with large numbers of components (especially repetitive ones). He nonetheless concluded, "it is only worthwhile to make drawings on the computer if you get something more out of the drawing than just a drawing", so rather than positioning Sketchpad primarily as a computer drawing tool, he positioned it as a "man-machine graphical communication system", with sketching as the input method by which a designer communicated design information to the backend reasoning systems. To that end, it supported semantic annotations about the meanings of lines in the sketch and their relationships to each other, allowing, for example, force-distribution analysis on a sketch of a truss bridge, or simulation of sketches of electronic circuits (see pp. 137-138 of his thesis, whose subtitle continues the conversational metaphor: A Man-Machine Graphical Communication System).
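To make the drawing-as-communication idea concrete: because lines carried machine-readable meaning, analyses could be run directly against the sketch. Here is a minimal sketch of that idea in modern terms (my own gloss, with invented names and a deliberately simple statics formula, not Sketchpad's actual data structures):

```python
import math
from dataclasses import dataclass

# Stand-in for Sketchpad-style semantic annotation: a line in the
# sketch isn't just geometry, it also carries a machine-readable role.
@dataclass
class Line:
    x1: float; y1: float; x2: float; y2: float
    role: str                       # e.g. "truss-member"

def incline_deg(m: Line) -> float:
    return math.degrees(math.atan2(abs(m.y2 - m.y1), abs(m.x2 - m.x1)))

def member_force(load_kN: float, m: Line) -> float:
    """Axial force in each member of a symmetric two-bar truss with a
    vertical load at the shared joint: 2 * F * sin(theta) = W."""
    return load_kN / (2 * math.sin(math.radians(incline_deg(m))))

# A tiny "sketch": two annotated members meeting at (2, 1), loaded with 10 kN.
left = Line(0, 0, 2, 1, "truss-member")
right = Line(4, 0, 2, 1, "truss-member")
print(f"{member_force(10.0, left):.2f} kN per member")   # ~11.18 kN
```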
In contrast, some later systems did see interactive modeling and production of design diagrams as a major use of CAD (see, e.g., David Weisberg's recounting of the first commercial CAD system [pdf]). An influential line of work in that direction developed [pdf] a set of graphically editable three-dimensional surface primitives that could be combined to produce arbitrary shapes.
A conceptual landmark on the "backend" side came in 1969 with Herb Simon's Sciences of the Artificial, which both developed a theory of design as a problem-solving activity, and connected it to concepts in the burgeoning artificial-intelligence literature.
By the 1970s, work had multiplied to such an extent, especially in the architecture community, that there existed a whole range of design tools, assistants, critics, and automators, each varying both in technical properties and in ideas about how they should fit into, and perhaps change, the design process. One cluster of work was spurred by the 1971 introduction of shape grammars into architecture by George Stiny and James Gips, which tied CAD to the generative-content work that had already begun flourishing in areas like procedural art. By 1977, there was enough work for Nigel Cross to write a book-length survey taking stock of the field, The Automated Architect, consisting in part of a survey of the many extant systems, and in part of a deeper analysis of what automation was supposed to bring to architecture, both technically and socially. A blurb in 1977 billed it as "an anticipatory assessment of the impact of computer-aided design systems... written from a social and human orientation, in contrast to the machine orientation of almost all other work in the field", marking something of a turn away from a pure engineering orientation (Cross would, 30 years later, return to an analysis of what design research is exactly, concluding that it's a third field, neither pure science nor humanities, in his quite good book Designerly Ways of Knowing).
A subsequent wave of systems, coinciding with a stronger shift from the engineering to the design community, took design-support systems in several different directions, mostly focusing on the importance of knowledge in specific design domains.
In many domains, vocabularies and representations had evolved over significant periods of time to encode useful ways of thinking about problems. Starting in the early 1980s, Bryan Lawson and his collaborators criticized the use of a generic set of geometric surfaces as the representation for all architectural design problems, arguing that doing so lost domain-specific knowledge, at worst even encouraging a design style that leads to visually impressive but poor designs, akin to overusing fonts and visual effects in desktop-publishing software (a retrospective appears in his 2002 Leonardo article, "CAD and creativity: does the computer really help?"). Even where CAD didn't have outright negative effects, Lawson argued, the focus on visual modeling meant CAD failed to fulfill its original vision as a design assistant, serving instead the narrower role of computerized draughtsman. His own early attempt (1982) to improve that situation was a domain-specific tool for roof design, replacing the generic geometric primitives with the traditional architects' vocabulary of ridges, verges, valleys, eaves, hips, and so on—representations that bring relevant design questions to the fore, such as the relationships between structural support and space enclosure, and between interior and exterior surfaces. (Lawson much later expanded on this theory of space in architecture in the 2001 book The Language of Space.)
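As a rough illustration of the representational point (the class names and the overhang rule below are my inventions, not Lawson's system), compare a roof modeled as anonymous surfaces with one modeled in the roofing vocabulary, where design-relevant questions can be asked directly:

```python
from dataclasses import dataclass

# Generic-CAD view: a roof is just a bag of surfaces; nothing in the
# representation says what any surface *means*.
@dataclass
class Surface:
    vertices: list[tuple[float, float, float]]

# Domain view: the traditional roofing vocabulary.
@dataclass
class Ridge:
    length_mm: int

@dataclass
class Hip:
    pitch_deg: float

@dataclass
class Eave:
    overhang_mm: int

@dataclass
class Roof:
    ridges: list[Ridge]
    hips: list[Hip]
    eaves: list[Eave]

    def sheds_water_clear_of_walls(self) -> bool:
        # A design question the generic representation can't even pose
        # (the 300 mm threshold is purely illustrative):
        return all(e.overhang_mm >= 300 for e in self.eaves)
```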
The development of knowledge-based AI systems in the 1980s provided an opportunity to bring automated reasoning to more semantic representations (rather than numerical simulations like stress analysis, or purely structural inferences). For example, if a building were designed using terminology from municipal codes (windows, floors, hallways, etc.), and the municipal codes themselves were encoded in a design-support system, the system could determine which parts of the fire code applied, and whether a design met them. John Gero surveyed [pdf] this knowledge-based approach to CAD in 1986.
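A sketch of how that might look (the provision and thresholds below are invented for illustration, not an actual municipal code): once the design is expressed in the code's vocabulary, both applicability and compliance become computable queries.

```python
from dataclasses import dataclass

@dataclass
class Room:
    name: str
    floor: int
    exits: int
    occupancy: int

# Invented stand-in for an encoded fire-code provision: rooms above a
# certain occupancy must provide at least two exits.
def applies(room: Room) -> bool:
    return room.occupancy > 50

def complies(room: Room) -> bool:
    return room.exits >= 2

design = [Room("lecture hall", 1, 1, 120), Room("office", 3, 1, 4)]
for room in design:
    if applies(room):
        print(room.name, "OK" if complies(room) else "VIOLATION: needs 2+ exits")
# -> lecture hall VIOLATION: needs 2+ exits
```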
In the late 1980s and early 90s, Gerhard Fischer introduced the idea of domain-oriented design environments (DODEs), to combine and extend several of these approaches. DODEs start with building blocks meaningful in a particular domain (e.g. sinks, counters, ovens, and windows for kitchen design) and allow designers to compose them into higher-level representations. They extend the idea of domain-specific knowledge to include not only factual knowledge (such as structural soundness or building codes), but also design knowledge, such as best practices and common solutions. This knowledge can be employed to critique a proposed design (the sink isn't in front of a window) or to provide design suggestions along with the reasons for them (the sink should be placed near the range, due to common workflow), shifting the computer's role in the design conversation from providing backtalk to actively participating on the design side as well.
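A minimal sketch of such a critic, under stated assumptions (the rules, distances, and geometry predicate are mine, standing in for the actual critics in Fischer's systems): each rule pairs a check against the design with the rationale behind its suggestion.

```python
from dataclasses import dataclass

@dataclass
class Item:
    kind: str
    x: float
    y: float          # plan-view position, metres

def near(a: Item, b: Item, within: float = 1.5) -> bool:
    return abs(a.x - b.x) + abs(a.y - b.y) <= within

def critique(layout: list[Item]) -> list[str]:
    items = {i.kind: i for i in layout}
    messages = []
    if not near(items["sink"], items["window"]):
        messages.append("Consider placing the sink in front of the window "
                        "(natural light at a frequently used workstation).")
    if not near(items["sink"], items["range"]):
        messages.append("Consider placing the sink near the range "
                        "(pots move between them in a common workflow).")
    return messages

kitchen = [Item("sink", 0, 0), Item("window", 0, 0.5), Item("range", 4, 3)]
for msg in critique(kitchen):
    print(msg)
```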
In addition to domain-specific environments, the 80s also saw, in echoes of Simon's approach, considerable renewed effort in tying design and artificial-intelligence concepts more closely together, e.g. by investigating the relationship between "design spaces" and "search spaces", and the role of techniques like heuristic search and numerical optimization in design problem-solving. A 1991 paper on "Searching for designs" [pdf] by Robert Woodbury nicely surveys both the conceptual and the practical angles as of that time.
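In those terms, a design space is the set of states reachable by design moves, and designing is (in part) heuristically searching it. A toy sketch, with the state, moves, and scoring function all invented for illustration:

```python
import heapq

# Toy design state: widths of three rooms along a fixed 12 m wall.
start = (4, 4, 4)
TARGET = (6, 3, 3)     # pretend the requirements favour this split

def moves(state):
    """Design moves: shift 1 m of width between adjacent rooms."""
    for i in range(len(state) - 1):
        for d in (1, -1):
            s = list(state)
            s[i] += d
            s[i + 1] -= d
            if min(s) >= 1:
                yield tuple(s)

def score(state):
    """Heuristic 'design quality': distance from the requirements."""
    return sum(abs(a - b) for a, b in zip(state, TARGET))

# Best-first search over the design space.
frontier = [(score(start), start)]
seen = {start}
while frontier:
    cost, state = heapq.heappop(frontier)
    if cost == 0:
        print("design found:", state)
        break
    for nxt in moves(state):
        if nxt not in seen:
            seen.add(nxt)
            heapq.heappush(frontier, (score(nxt), nxt))
```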
Knowledge-based systems run into the problem that few domains are well-defined and static enough for their knowledge to be captured once and for all in a tool that is built and then deployed to users, making open systems that can be evolved and extended a necessity. DODEs have accordingly been extended to support designers evolving (and sharing amongst each other) their representations and design knowledge, which has developed into a concept of "metadesign" systems [pdf] that support the designer not only in evolving their design within a specific design domain, but also in specifying and evolving the design domains themselves.
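At the code level, the metadesign move is roughly the difference between shipping a fixed rule set and exposing the rule set as an extension point; a sketch (with all names mine) in which a designer adds a critic the tool's builders never anticipated:

```python
# The tool ships with an extension point rather than a fixed rule set:
# designers register (and can share) their own critics at use time.
CRITICS = []

def critic(fn):
    """Register an end-user-supplied design rule."""
    CRITICS.append(fn)
    return fn

# A designer's own rule, added without modifying the tool itself:
@critic
def dishwasher_left_of_sink(layout):
    if layout.get("dishwasher_x", 0) > layout.get("sink_x", 0):
        return "This studio's convention: dishwashers go left of sinks."

def critique(layout):
    return [msg for rule in CRITICS if (msg := rule(layout)) is not None]

print(critique({"sink_x": 2.0, "dishwasher_x": 3.0}))
# -> ["This studio's convention: dishwashers go left of sinks."]
```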
A different line of work at around the same time argued that CAD had fundamentally erred in being based around graphical interaction with a drawing: of the two things that can be found in any design office, namely conversation and drawings, the conversation is where the design takes place, with the drawings secondary, mainly recording the end result of design. In particular, Lawson and Loke argued that, early in the design process, there is rarely a single design in progress for which there could be a drawing, but instead many, often disconnected, bits and pieces of design goals, tentative conclusions, design decisions, and ideas being pursued in parallel. While admitting that drawing does play a role in this process, they instead built a prototype system that converses textually with the designer, learning about the designer's goals, bringing them up later as reminders, making suggestions, critiquing ideas, answering questions, and so on: a pretty solid return to the original conversation-with-machines metaphor.
This is an admittedly incomplete, and somewhat architecture- and AI-biased, history of the field of automated design and design assistance. Hopefully it provides some useful pointers into the voluminous—and sadly often overlooked—literature, from which I think we can still learn quite a bit (working in areas like videogame procedural-content generation myself, I'm often struck by how fresh work from as far back as the 1970s on architecture generation still seems).