Evaluative practices and performance indicators for digital platforms


By Christos Begkos and Katerina Antonopoulou

Rankings, ratings and reviews permeate digital platforms. They frame what we buy, where we eat, how we travel, how we consume. Their function is simple yet powerful: they streamline complex structures, quantify qualitative traits, place things in hierarchical orders and popularise their subject matter. Digital platforms offer evaluative infrastructures that ‘consist of an ecology of accounting devices in the form of rankings, lists, classifications, stars and other symbols (‘likes’, ‘links’, tags, and other traces left through clicks) which relate buyers, sellers, and objects’ (Kornberger et al., 2017, p. 81; Constantinides et al., 2018). Yet, in a digital setting without rankings, lists and other classifying devices, what constitutes ‘good’, ‘successful’ or ‘popular’, and how do people perceive such abstract notions?

In our recent work, we explored this question through an in-depth investigation of Instagram, which, unlike most other digital platforms, is devoid of rankings, ratings and reviews. Instagram only has a few ‘judgement’ devices (Karpik, 2010) available to accommodate commensuration, such as users’ number of followers and likes. Yet, Instagram users can easily ‘game the system’ and fabricate such performance measures, for example, through purchasing metrics from third-party sources. Our study investigated how Instagram users evaluate digital platform content in the absence of well-defined performance measures.

Our findings indicate that lay Instagram users broadly perceived that large volumes of likes, comments, but mostly followers, were tangible indications of ‘successful’ user accounts. Such perceptions led to ‘tit-for-tat’ tactics through which users would seek the reciprocal exchange of likes, comments and follows between unconnected users, to up their metrics. Users also engaged in activities which were often frowned upon by the Instagram community, such as engaging in ‘follow-unfollow’ and lobbying tactics. Users often procured engagement to fabricate their visible metrics, reflect a seemingly successful account and blur others’ evaluations of their performance.

Taking advantage of lay users’ understandings, our findings indicate that experienced users would often mould their profiles’ visible metrics into facades of what is perceived as a successful profile. The absence of well-defined performance measures encouraged experienced users to engage in calculative practices, such as ‘follower-following’ ratios and ‘follower-like’ comparisons, in search of ‘cheating’ signs and to discern legitimately successful profiles.
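Heuristics of this kind can be sketched in a few lines of code. The function below is a purely illustrative reconstruction of the calculative practices described above, not an implementation from the study: the threshold values and the function name are our assumptions, chosen only to show how a ‘follower-following’ ratio and a ‘follower-like’ comparison might jointly flag a fabricated profile.

```python
def engagement_signals(followers: int, following: int, avg_likes: float) -> dict:
    """Illustrative heuristics akin to 'follower-following' ratios and
    'follower-like' comparisons. Thresholds (0.5 and 0.01) are assumed
    for illustration only; they do not come from the study."""
    # Ratio of followers to accounts followed; a very low ratio may
    # indicate 'follow-unfollow' tactics.
    ratio = followers / following if following else float("inf")
    # Likes per follower; a very low rate may indicate purchased followers
    # who never engage with the content.
    like_rate = avg_likes / followers if followers else 0.0
    suspicious = ratio < 0.5 or like_rate < 0.01
    return {"ratio": ratio, "like_rate": like_rate, "suspicious": suspicious}
```

For example, a profile with 10,000 followers, 200 followed accounts and around 500 likes per post would pass both checks, while one following far more accounts than follow it back would be flagged.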

Our research argues that the lack of rankings, ratings and reviews welcomes users’ calculative practices that aim to evaluate performance and assess the credibility and trustworthiness of ill-defined performance measures. The lack of classification devices forces actors to devise their own performance measures, thus invoking an ‘ecology’ of implicit calculative practices (e.g., ‘follower/following’ ratios). Our study argues that, although such practices may lack specificity or credibility, they are powerful enough to bring together multiple actors, businesses and teleologies. 

Our findings provide insight into how Instagram users fabricate performance metrics, what they perceive as ‘good’ online content and what constitutes an ‘impactful’ user account or a ‘successful’ social media campaign. Such findings are valuable to entrepreneurs and practitioners who seek to evaluate digital platform performance in the absence of robust judgement devices.


Dr Christos Begkos is an Assistant Professor in Management Accounting at Alliance Manchester Business School, University of Manchester, UK. His research explores the nexus of performance measurement and digital technologies in healthcare, entrepreneurial and social media settings. His research has been published in Critical Perspectives in Accounting; Accounting, Auditing & Accountability Journal; Technological Forecasting & Social Change; and Public Money & Management. Christos can be contacted at Christos.Begkos@manchester.ac.uk and on Twitter @christos_begkos.

Dr Katerina Antonopoulou is an Assistant Professor of Information Systems at University of Sussex Business School, Brighton, UK. Her expertise is in the areas of digital innovation, business models and digital business strategy, while her research interests include topics related to digital transformation as well as digital entrepreneurship. Katerina’s research has appeared in high-quality journals and conferences in the field, such as Technological Forecasting & Social Change; Accounting, Auditing & Accountability Journal; the International Conference on Information Systems; the European Conference on Information Systems; and the Academy of Management. She can be contacted at k.antonopoulou@sussex.ac.uk.

This blog post draws on research presented in the following publications:

Begkos, C. and Antonopoulou, K. (2020). Measuring the unknown: Evaluative practices and performance indicators for digital platforms. Accounting, Auditing & Accountability Journal, 33(3), pp. 588-619.

Constantinides, P., Henfridsson, O. and Parker, G.G. (2018). Introduction—Platforms and infrastructures in the digital age. Information Systems Research, 29(2), pp. 381-400.

Karpik, L. (2010). The economics of singularities. Princeton, NJ: Princeton University Press.

Kornberger, M., Pflueger, D. and Mouritsen, J. (2017). Evaluative infrastructures: Accounting for platform organization. Accounting, Organizations and Society, 60, pp. 79-95.
