Our task measures only a sliver of the complex process of creativity. We focus on verbal creativity and divergent thinking, the ability to generate diverse solutions to open-ended problems. In our studies, scores on the task correlate with various other measures of creativity, such as the ability to generate novel uses for common objects, find connections between distant concepts, or solve different kinds of verbal problems. A comprehensive measure of creativity would also test aspects such as the appropriateness of the responses or real-world creative achievement.
Once you see the distances between your words, it is easier to artificially inflate or deflate your score. In our studies, participants do not receive feedback about their score or word distances, and their scores are fairly consistent when they retake the task two weeks later.
Objectively measuring the relatedness of words is difficult, so as a proxy we look at how often words appear in similar contexts. We use the Common Crawl corpus, which contains thousands of different words across billions of webpages. Using an algorithm, we compute the distance (or relatedness) between the words: words such as “cat” and “dog” often appear close together and thus have a small distance between them, while words such as “cat” and “book” have a greater distance. The total score is simply the average of these pairwise distances: greater distances give a higher score.
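The averaging step can be sketched as follows. The word vectors below are illustrative stand-ins (not the corpus-derived vectors we actually use), and cosine distance is shown as one common way to quantify relatedness between such vectors:

```python
from itertools import combinations
import math

# Toy vectors standing in for corpus-derived word embeddings.
# Values are illustrative only; real vectors come from a large corpus.
vectors = {
    "cat":  [0.9, 0.8, 0.1],
    "dog":  [0.85, 0.75, 0.2],
    "book": [0.1, 0.2, 0.9],
}

def cosine_distance(u, v):
    """1 minus cosine similarity: related words score near 0, unrelated near 1."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1 - dot / (norm_u * norm_v)

def score(words):
    """Average distance over every pair of words in the list."""
    pairs = list(combinations(words, 2))
    return sum(cosine_distance(vectors[a], vectors[b]) for a, b in pairs) / len(pairs)
```

With these toy vectors, "cat" and "dog" sit close together, so adding an unrelated word like "book" raises the average pairwise distance and therefore the score.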
We use a list of words from an English dictionary. Some words are excluded, such as overly specific words that can artificially inflate the score.
If we required 10 valid words, we would not be able to compute a score if there were a single misspelling. Requiring only 7 of the 10 allows for some redundancy; you can provide up to 3 invalid words and still receive a score. In our initial study, nobody provided fewer than 7 valid words, which is why we chose 7.
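The 7-of-10 rule can be sketched like this. Here VALID_WORDS is a tiny stand-in for the dictionary, and keeping the first seven valid entries is one plausible way to apply the rule, not necessarily the exact implementation:

```python
# Stand-in for the English dictionary used to validate entries.
VALID_WORDS = {"cat", "dog", "book", "river", "cloud", "music", "engine", "salt"}

def select_valid(words, needed=7):
    """Keep the first `needed` dictionary words; return None if there are too few."""
    valid = [w for w in words if w.lower() in VALID_WORDS]
    return valid[:needed] if len(valid) >= needed else None

# Up to 3 invalid entries (misspellings, made-up words) are tolerated out of 10:
entries = ["cat", "dog", "xqzt", "book", "river", "asdf",
           "cloud", "music", "engine", "zzz"]
```

Here `select_valid(entries)` still yields seven scoreable words despite three invalid ones, while a list with fewer than seven valid words yields no score at all.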
A low score simply means you received a low score on this particular measure. There are many different types and aspects of creativity that the task does not measure. No single task can fully capture creativity, which is why in studies we often use several measures at once.
People with lower scores generally do not share their results. The results page gives an approximate percentile rank based on our published sample, and scores in the current online sample are only slightly higher. Some people also complete the task multiple times and then share only their best score (some people complete the task over 3,000 times).
The task measures divergent thinking, which is only one process involved in creativity. Like any verbal creativity measure, it partly measures creativity and partly measures verbal ability.
Scores can also be artificially inflated through various strategies, such as using exceptionally rare words or using structurally similar words (e.g., rhyming words or words that start with the same letter). In our studies, we usually give participants a 4-minute time limit to reduce the likelihood that they will think of these strategies.
We are currently testing how the English version of the task works across different countries, but ideally people would be able to complete the task in their native language. In theory, a corpus could be generated for any language in the world and the task would work the same.
In our previous study, knowledge of multiple languages did not seem to affect the results. In a related study we are currently testing the effect of experiences with different cultures.
Contact us if you have any questions.