Abstract
The online crowdsourcing platform Amazon Mechanical Turk (MTurk) assigns a "master" qualification to workers with outstanding task performance records. However, prior research comparing MTurk masters and regular workers has reported inconsistent results regarding actual performance differences between the two groups. Furthermore, these studies have relied largely on survey methods, and research comparing cognitive task performance between MTurk masters and regular workers remains limited. The current study compared the performance of MTurk masters, regular workers, and university students recruited offline on a visual recognition memory task. Memory performance was comparable between MTurk masters and offline participants, whereas MTurk regular workers showed a pattern of results different from both groups. The same pattern of results held after excluding low-performing participants from each group. These findings suggest that appropriately screened online participants can effectively replicate results from traditional offline experiments. However, they also underscore that online crowdsourcing platforms such as MTurk comprise heterogeneous participant groups, so study outcomes may vary depending on participant selection criteria.
|
|
Key Words |
Online Experiment, Online Crowdsourcing, Amazon Mechanical Turk, Group Difference, Visual Memory
|
|
|
|