Gamifying Evaluation of Creativity in Early Stages of Conceptual Design

DS 86: Proceedings of The Fourth International Conference on Design Creativity, Georgia Institute of Technology, Atlanta, GA, USA

Year: 2016
Editors: Julie Linsey, Maria Yang, and Yukari Nagai
Authors: Dinar, Mahmoud; Shah, Jami
Series: ICDC
ISBN: 978-1-904670-82-7

Abstract

Evaluation of creativity in design is often based on measuring the outcome of a design task in the conceptual stages, i.e., problem formulation and ideation. A commonly used example is the method of Shah, Smith, & Vargas-Hernandez (2003). Methods that evaluate creativity during the design process and provide immediate feedback to the designer are rare. One approach that does not require finishing a single design task is to test designers' skill set through a questionnaire or multiple design problems. Such tests are usually conducted on pen and paper, are time consuming, and still do not provide instant feedback to designers. We propose a gamified interactive web-based testbed that not only informs users about the creativity in their conceptual design, but also encourages them to improve their skills by showing them how they fare compared to other users.

The gamified creativity evaluation environment is an extension of our interactive web-based testbed, the Problem Formulator (Maclellan, Langley, Shah, & Dinar, 2013). It was created to collect text fragments describing design problems, where users were asked to label each fragment with one of six predefined categories: requirement, use scenario, function, artifact, behaviour, and issue. Hierarchical structure within the categories and links among them facilitated substantial expressiveness in conceptual design as well as the measurement of various characteristics, e.g., the number of implicit requirements. In parallel, a set of sub-skills that designers should possess in conceptual design was identified for the problem formulation stage (Dinar, Shah, & Todeti, 2015). The source of these skills was either the literature or the results of our previous experiments, which highlighted which formulation characteristics could be associated with a priori tests of creativity. The next step was to associate the identified skills with quantitative measures. These measures, P-map characteristics, were taken from the Problem Map ontological framework (Dinar, Danielescu, MacLellan, Shah, & Langley, 2015), which underlies the Problem Formulator testbed, i.e., defines its vocabulary and grammar. The association of skills with P-map measures and results from a group of students were reported in (Dinar, Park, & Shah, 2015).

Armed with a testbed for data collection and a method of quantifying skills from the collected data, the next task in gamifying creativity evaluation was to automate the measurement process so as to provide immediate feedback on entered fragments. To that end, inventories of responses were created, scores were assigned to each response, and combinations of scores were determined for each measure.
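As an illustration only (the class names, the plain-text category strings, and the scoring rule below are hypothetical sketches rather than the actual Problem Formulator implementation), labelled fragments and an inventory-based score could be represented along these lines:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# The six predefined fragment categories used by the Problem Formulator.
CATEGORIES = {"requirement", "use scenario", "function", "artifact", "behaviour", "issue"}

@dataclass
class Fragment:
    text: str
    category: str                                           # one of CATEGORIES
    parent: Optional["Fragment"] = None                     # hierarchy within a category
    links: List["Fragment"] = field(default_factory=list)   # links across categories

def score_against_inventory(fragments: List[Fragment], inventory: dict) -> float:
    """Hypothetical scoring rule: sum the inventory score of every fragment whose
    text matches an inventory entry, normalised by the best attainable total.
    The actual testbed combines several such scores into each P-map measure."""
    total = sum(inventory.get(f.text.lower(), 0.0) for f in fragments)
    best = sum(inventory.values())
    return total / best if best else 0.0
```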
The inventories emerged from aggregating all entries on different design problems from two groups of product design students in 2013 and 2014, and clustering the responses into semantically similar groups. Two approaches were adopted: one based on finding the closest hypernyms in WordNet (Miller, 1995), the other based on a commonly used semantic similarity measure (Han, Kashyap, Finin, Mayfield, & Weese, 2013). The extended interactive testbed shows users their scores and the best scores, as well as changes to the scores as users add new entries. The clustering methods used in creating the inventories can also be used to evaluate the responses.

Another possibility is to predict creativity in the ideation outcome for the given design problems. We have developed both classification and regression models with ideation metrics (Shah et al., 2003) as dependent variables and P-map characteristics (Dinar, Danielescu, MacLellan, et al., 2015) as independent variables; see (Dinar, Park, Shah, & Langley, 2015). Data collected from one group or one problem can be used as the training set and data from a different group or problem as the test set, to determine how well the models transfer across groups or problems. Future research can exploit shape abstraction algorithms (Orbay, Fu, & Kara, 2015) to associate concept sketches with textual responses in the function-idea inventory of a specific design problem, complementing the gamified conceptual design app with direct measures of ideation.
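As a rough sketch of the hypernym-based grouping (using NLTK's WordNet interface; the greedy grouping strategy and the Wu-Palmer similarity threshold are assumptions, not the published clustering procedure), two response terms could be placed in the same cluster when their noun senses are close in the hypernym hierarchy:

```python
from nltk.corpus import wordnet as wn  # requires the WordNet corpus: nltk.download("wordnet")

def noun_synset(term):
    """First noun sense of a response term, or None if WordNet has no entry."""
    synsets = wn.synsets(term, pos=wn.NOUN)
    return synsets[0] if synsets else None

def in_same_cluster(term_a, term_b, threshold=0.8):
    """Treat two terms as semantically similar when their Wu-Palmer similarity,
    which depends on the depth of their lowest common hypernym, is high enough."""
    a, b = noun_synset(term_a), noun_synset(term_b)
    if a is None or b is None:
        return False
    return (a.wup_similarity(b) or 0.0) >= threshold

def cluster_terms(terms, threshold=0.8):
    """Greedy single-pass clustering: attach each term to the first cluster whose
    representative it matches, otherwise start a new cluster."""
    clusters = []
    for term in terms:
        for cluster in clusters:
            if in_same_cluster(term, cluster[0], threshold):
                cluster.append(term)
                break
        else:
            clusters.append([term])
    return clusters
```

In the same spirit, checking how a regression model of an ideation metric transfers from one cohort or design problem to another could look like the following scikit-learn sketch; the file names and arrays are placeholders for P-map characteristic matrices and ideation scores, not the data sets reported in the cited studies:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder data: rows are designers, columns are P-map characteristics;
# y holds one ideation metric (e.g., novelty) for each designer.
X_train, y_train = np.load("group_2013_X.npy"), np.load("group_2013_y.npy")
X_test, y_test = np.load("group_2014_X.npy"), np.load("group_2014_y.npy")

model = LinearRegression().fit(X_train, y_train)  # fit on one group/problem
transfer_r2 = model.score(X_test, y_test)         # R^2 on the other group/problem
print(f"Model transfer R^2: {transfer_r2:.2f}")
```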

Keywords: gamification, evaluation, conceptual design
