Quali-quanti visual methods and political bots: A cross-platform study of pro- and anti-bolsobots

Computational social science research on automated social media accounts, colloquially dubbed "bots", has tended to rely on binary verification methods to detect bot operations on social media. Typically focused on textual data from Twitter (now rebranded as "X"), these inference-based methods are prone to finding false positives and failing to understand the subtler ways in which bots operate over time, through visual content and in particular contexts. This research brings methodological contributions to such studies, focusing on what it calls "bolsobots" in Brazilian social media. Named after former Brazilian President Jair Bolsonaro, the bolsobots refer to the extensive and skilful usage of partially or fully automated accounts by marketing teams, hackers, activists or campaign supporters. These accounts leverage online political culture to sway public opinion for or against public policies, opposition figures, or Bolsonaro himself. Drawing on empirical case studies, this paper implements quali-quanti visual methods to operationalise specific techniques for interpreting bot-associated image collections and textual content across Instagram, TikTok and Twitter/X.


Introduction
This article addresses the methodological challenges of understanding the "Bolsobots" phenomenon, an extensive and skilful usage of automated accounts that swarm social media environments to successfully sway public opinion (Messenberg 2019; Pereira 2022). This phenomenon has been engendered by the convergence of a polarised political landscape and a unique social media culture, which features extensive online engagement, multi-platform usage, influencer culture, and paid traffic reachability. Bolsobots, as defined here, are social media accounts, partially or fully automated, that promote (or demote) Jair Bolsonaro and his political agenda, allies and opponents, on behalf of specialised marketing teams, hackers/activists, campaign supporters or paid workers.
Despite efforts to curb their agency in the aftermath of coordinated disinformation campaigns in the 2018 general elections, reports show that bolsobots continue to play an increasingly blurred yet omnipresent role in Brazilian online infospheres (Ribeiro and Lobato 2022). If, on the one hand, the boundaries between "bots" and "authentic users" become ambiguous in Brazil's divisive online political militantism, on the other, overlaps between automation, authenticity and partisanship pose methodological challenges to any research seeking to identify, follow, profile, measure and, most importantly, account for the practices and influence of bots.
Detection techniques used to capture bots and bot networks most commonly hinge on computational methods that look for suspicious account patterns. They are predominantly developed for text-based datasets extracted through Twitter's (now rebranded "X") increasingly inaccessible APIs [1]. In spite of their widespread usage, recent debates point out that quantitative and textual analysis of profile metrics cannot capture click farms and other bot-coordinated actions increasingly attuned to particular political debates, platform moderation, or emerging social media platforms, such as TikTok.
In response, we argue that visual and other methods can provide a nuanced perspective that quantitative analysis alone might not capture. They provide means that not only deal with the imagery of increasingly visually-driven social media platforms but, more importantly, make sense of the relations between visual and textual content. Crucial in implementing visual methods is the role of visual models (Colombo, Bounegru and Gray 2023), such as network grids and image walls. These devices demand a navigational procedure that considers the environment where data comes from, the capabilities of the research software in use, and how visualised data is organised, be it spreadsheets, image folders, JSON files or others.

[1] Since Elon Musk's Twitter takeover in 2022, the rebranded X has deprecated its Academic API and rendered its Standard API much more expensive and limited in data collection. The scenario in which we collected data was more permissive; the Academic API allowed the collection of 10 million Tweets per month, at no cost. At present, researchers have opted for scrapers, which sometimes suffer from X's attempts at halting violations of its Terms of Use.

Moreover, visual methods invite researchers to reflect on the steps leading to what is represented in the visualisation, considering what is at stake, what to interpret and what to omit.
This research proposes qualitative-quantitative (quali-quanti) visual methods for bot detection, dataset curation and analysis from cross-platform case studies of pro- and anti-bolsobots. Using network, visual and textual analysis, this article discusses three methodological challenges in capturing and analysing bolsobots across Instagram, TikTok and Twitter/X. First, we argue that list-making and dataset-building approaches can be based on bot characteristics and following networks. Second, we explore the challenge of making sense of bolsobots' traits through quali-quanti visual methods, using a navigational procedure for exploring data and image visualisations as crucial analytical tools. Third, we discuss the insights that can be derived from refraining from distinguishing bots from non-bots, opting for a non-binary perspective on what constitutes an authentic user account.
In the following sections, we revisit the literature on bot studies, formulate a critique of existing bot detection methods, and discuss how quali-quantitative methods may facilitate or innovate such approaches. Next, we introduce and operationalise bot-following network-oriented analysis of Instagram bots, and compare image profiles with account names of TikTok bots. We also present a qualitative approach for analysing Twitter bots, expanding and questioning the effectiveness of automated research practices. Finally, we reflect upon our methodological challenges and the blurred distinctions between bots and non-bots as "automated" or "authentic".
In sum, we argue that studying bots and their visual traits demands constant "quali-quanti readings" (Venturini, Cardon and Cointet 2015) of the relational nature of platform data and usage cultures while also paying attention to the technicity-of-the-mediums (Omena 2021). This methodological contribution is grounded in a comprehensive empirical endeavour that tests methods iteratively, acknowledging errors and going through substantial descriptive and reflexive undertakings. Selected case studies underpin a bot methodology that integrates cross-platform, multimodal, and cultural approaches. Rather than prioritising extensive interpretations of case studies documented in previous works, this paper emphasises an in-depth discussion of the methodological underpinnings stemming from our empirical work and findings.

A retrospective of bot studies
Social or political bots are (semi)automated social media accounts that rely on web-based applications to act on their behalf while adhering to the platform's terms of use. They can be programmed to (un)follow, like posts with specific hashtags, provide comments based on keywords, accounts or mentions, and produce or share content, offering "real engagement and users" at different quality levels. They have also been proven to manipulate public opinion, (re)direct attention, generate value, support large-scale advertising strategies, and spread (dis)information in electoral contexts (Howard, Woolley and Calo 2018; Shao et al. 2018; Murthy et al. 2016).
Scholars have employed various quantitative and qualitative methods to study social media bots and measure their impact on information flows. To detect bots, researchers utilise platform grammar and machine learning (ML) techniques, such as neural networks and support vector machines, supported by classification algorithms (Akyon and Kalfaoglu 2019). In these studies, models are trained on features like number of likes and posts, following/follower ratios, account privacy settings, username patterns and the length of profile descriptions. Other methods include interviewing bot creators, ethnographic observations, mapping bot automation, and purchasing services, i.e. acquiring engagement metrics and followers to capture and study bots (Lindquist 2022; Omena et al. 2019; Assenmacher et al. 2020).
In recent years, there has been growing criticism of the limitations of predominant methodological approaches to social bot studies. The widespread reliance of bot studies and tools on Twitter/X data has been said to hinder the understanding of bot dynamics across increasingly connected social media platforms. Bot detection algorithms may also be unsuitable for capturing the influence and reach of bots in broader networks and specific datasets (Cresci et al. 2023; Gallwitz and Kreil 2022; Gorwa and Guilbeault 2020; Martini et al. 2021; Grimme, Assenmacher and Adam 2018; Rauchfleisch and Kaiser 2020).
Botometer (Yang, Ferrara and Menczer 2022) is worth scrutinising in this context. It is an ML tool that uses a combination of features to assign Twitter/X accounts a statistical likelihood of being bots or humans. The tool analyses account metadata (e.g., account age, number of followers), content-based features (e.g., use of hashtags, frequency of tweets), and network-based features (e.g., centrality in the Twitter/X network). However, as critics point out, such methods are not entirely accurate because, among other reasons, they fail to classify "borderline" or "hybrid" accounts that are operated by both humans and automation. Rauchfleisch and Kaiser (2020) and our study found that Botometer's thresholds, even when used very conservatively, can return false negatives (i.e., bots being classified as humans) or false positives (i.e., humans being classified as bots). Additionally, the accuracy of Botometer may vary depending on context, as its scores are particularly imprecise in languages other than English. Most importantly, the tool's output may be difficult to interpret, as the underlying ML algorithm lacks transparency (Martini et al. 2021).
To mitigate these shortcomings, a combination of methods has been suggested. These may include manual content analysis, which examines indicative patterns of bot activity in social media posts, and network analysis, which explores coordinated efforts that involve multiple accounts (automated, semi-automated and human-operated). In this regard, Grimme, Assenmacher and Adam (2018) plead for social bot research to move away from inference-based approaches that focus on identifying individual accounts, as they may not be a necessary condition for the overarching goal of identifying harmful, strategic attacks on public opinion.
Accordingly, this paper abandons a binary perspective on what "bots" and "non-bots" are and aims instead to understand the agency and strategies of bots within the specific environments in which they act and exist. To achieve this goal, we argue that quali-quanti visual methods can offer alternative solutions to address the modus operandi of bots, as they enable the exploration of complex relationships and patterns in large datasets that may not be apparent with traditional bot detection methods.

A grasp of quali-quanti methods
Emerging from Science and Technology Studies, quali-quanti methods have been implemented under the Digital Sociology and Digital Methods schools of thought. These methods, coined by Venturini et al. (2015), challenge social theorists' and practitioners' understanding of quantification as they embrace and affirm the integration of qualitative and quantitative approaches as a whole rather than separating them (Latour et al. 2012; Venturini, Cardon and Cointet 2015). Quali-quantitative methods encompass the notion of navigation as a crucial practice. This involves navigating through platform and software interfaces and data points using a provisional visualisation that facilitates the analysis of individual data aspects, extending to aggregates and back (see Latour 2010). Consequently, these methods require tremendous effort from researchers, who must, in this sense, face the challenge of "gaining in quantity without losing in quality" (Venturini, Cardon and Cointet 2015).
Practical questions arise. How can one bridge the broader patterns that quantitative analysis reveals with the minutest details of qualitative examination? How can method design and implementation ensure that one captures and represents all significant aspects in between? Recent digital methods literature has provided some concrete answers to these questions, for example, network visual explorations, participatory lexicon creation to navigate big textual datasets, and visual models to make sense of image collections (Marres 2020; Venturini, Jacomy and Jensen 2021; Colombo, Bounegru and Gray 2023; Rabello et al. 2022; Moats and Borra 2018). First, one uses data visualisation as a method, not an end, to explore, describe, and analyse digital data's relational and contextual nature. Second, one implements navigational procedures for datasets while accounting for the inherent layers of technical mediation of research software. Finally, researchers are invited to engage with digital fieldwork and study the Web from a methodological standpoint, i.e. understanding platforms' grammatisation and (sub)cultures of use and how these relate with the computational media required to implement methods (Omena 2021).
This paper adopts and expands the implementation of quali-quanti methods, operationalising specific techniques for interpreting bot-associated image collections and textual content. Such methods may offer a valuable approach to bot studies to acknowledge the situational and relational contexts in which (semi)automated accounts exist. Moreover, as we argue and discuss in the next section, these methods can facilitate innovation in the ways we study and understand bots.

Conceptual and Analytical Framework
This section situates our proposed methodology within a conceptual framework of method-making and focuses on the challenge of understanding the bolsobots phenomenon and visual vernaculars. Rooted in digital methods scholarship, this perspective emphasises that methods emerge from an iterative and thorough process of evidence testing rather than being a standalone instrument for empirical research.

Reimagining bot studies with digital methods
Aiming to inform the current state of affairs and modes of agency of bolsobots, the research presented here follows a cross-platform approach (i.e., Instagram, TikTok, and Twitter/X) attuned to the specificities of each platform. This approach is grounded in four principles of bot studies, reimagined through the lens of digital methods scholarship. The first principle involves analysing bot profile characteristics, including the use of digit patterns or similar names in usernames, the use of default platform pictures as profile images (e.g., Instagram's human silhouette or Twitter/X's "egg"), discrepancies in the following-followers ratio, the number or absence of posts, and the lack of original posts, among others (see Confessore et al. 2018; Shu et al. 2020; Akyon and Kalfaoglu 2019). The characteristics of bolsobots are taken as an entry point to designing queries and curating bot datasets.
The second principle is that bots follow bots, as scientific and journalistic literature has ascertained by examining the follower-following ratio and automation market services (Akyon and Kalfaoglu 2019; Colombo and Gaetano 2020; Lindquist 2021, 2022). Bolsobots' following networks are analysed to consider the socio-technical environment in which they operate, including the visual-textual content associated with them and the profiles of other actors that participate in this assemblage.
The third principle, supported by empirical evidence, suggests that default social media profile images are a tell-tale sign of bots. Studies of profile images, led by one of the authors (Omena), found that these accounts can be identified by the unique identifier (ID) number of the default profile picture in its URL or by grouping profile images by colour patterns in visualisation software. For example, Instagram's default image ID is 44884218_345707102882519_2446069589734326272_n. Bots with default profile pictures are so-called ghost accounts, which do not require sophisticated profile presentation and operate in unobtrusive modes (Omena et al. 2019; Omena et al. 2021a,b).

Finally, the fourth principle of bot studies is the premise that bots change constantly, adopting increasingly human-like characteristics that hinder their identification through patterns recognisable by detection algorithms. This change is not just superficial, involving alterations to profile pictures or descriptions, but is at the core of their behaviour and communication patterns (Ruediger 2018a; Varol et al. 2017; Cresci et al. 2017; Freitas et al. 2015; Cai et al. 2022). To demonstrate the latter principle, we propose a qualitative research approach to trace changes in discourses, behaviour and strategies of Twitter/X bots identified with Botometer. In so doing, we argue that the core challenge is not just distinguishing them from "human" counterparts but tackling their continuously evolving features in response to the automation market, API changes, platform policies, social trends, and, most importantly, changing conceptions of automation.
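The first and third principles lend themselves to simple heuristics. As a minimal sketch (the function names and the three-digit threshold are our own; the Instagram image ID is the one cited above), ghost accounts and digit-pattern usernames can be flagged as follows:

```python
import re

# Instagram's default ("ghost") profile picture carries a fixed media ID in its
# URL, as noted above; checking for that ID flags likely ghost accounts.
DEFAULT_IMAGE_ID = "44884218_345707102882519_2446069589734326272_n"

def is_ghost_account(profile_image_url: str) -> bool:
    """Return True if the profile picture URL points to the platform default."""
    return DEFAULT_IMAGE_ID in profile_image_url

def has_numeric_username(username: str) -> bool:
    """Heuristic for bot-like usernames: a trailing run of three or more digits."""
    return re.search(r"\d{3,}$", username) is not None
```

In practice, such rules only seed a candidate list; as the fourth principle stresses, they must be revisited continually as bots change.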

Why study Bolsobots?
During the 2018 and 2022 presidential campaigns, pro-Bolsonaro bot accounts employed specific strategies to advance their political agenda, such as astroturfing tactics and divisive narrative propagation (Machado et al. 2018; Lobo and Carvalho 2018; Recuero, Soares and Gruzd 2020). Moreover, by tailoring messages to specific voter profiles and pretesting certain narratives, bolsobot networks harnessed the power of technology to mobilise support and shape public opinion; their strategic alliance with evangelical broadcasting media and emphasis on moral and nationalist issues is said to have further consolidated Bolsonaro's base (Santini, Salles and Tucci 2021). Though limited in their reach, anti-Bolsonaro bot accounts also employed tactics of spreading rumours and memes to promote their agendas. They propagated claims that Bolsonaro's knife attack was "simulated" to conceal a cancer surgery by consistently sharing a satirical article portraying him as the most (dis)honest politician in the world (Ruediger 2018b).
Brazil's distinctive social media culture provides an ideal backdrop for studying the complexities of automated political behaviour. Brazilian users exhibit high levels of online engagement, spending a monthly average of 15.6 hours on Instagram and 20.2 hours on TikTok (Kemp 2022). A multimodal digital landscape, with users accessing an average of 8.7 different platforms monthly, also provides fertile ground for automated political communication. Additionally, the country boasts a thriving influencer marketing industry, coupled with the prevalence of automated accounts, creating an environment conducive to the widespread dissemination of political content (Grohmann et al. 2022). Lastly, the high reachability of paid traffic highlights the potential effectiveness of bots reaching a sizable audience: Brazil ranks third globally in paid reachability on Instagram (67.4%) and TikTok (45.7%), and fourth on Twitter/X (10.8%) (Kemp 2022).

Bolsobots' visual vernaculars
From (memetic) profile pictures to (slogan) account names, bolsobots portray Bolsonaro in either a favourable or unfavourable light and often incorporate socio-political symbols (Figure 1). Despite the efforts of social media platforms, including messaging apps such as WhatsApp and Telegram, to restrict their more or less coordinated activities (Euronews 2021), bolsobots still endure and actively adapt to online cultures and platform mechanisms. Our research identified that bolsobots adopt similar political symbols in profile pictures and repetitive usernames (Figure 1) as a powerful tool for creating echo chambers that strengthen political ideas within a network (Sunstein 2017, p. 73). In their account usernames, Bolsonaro's name is coupled with slogans related to patriotism, family and religious values, and the LGBT community. In profile pictures, avatars of the former president as a memetic persona sit alongside Brazilian flags and more extreme political stances (Omena et al. 2021a,b). The analysis of bot profile images yields insights into digital practices and unveils the concealed structures, underlying cultural codes, and prevailing meanings within these tactics and strategies (see Aiello 2021).
In light of this, it is crucial to consider the visual role of bot accounts, since they employ strategies to amplify messages and ideas artificially (Ferrara et al. 2016). When Joice Hasselmann, a deputy and ally-turned-foe of Bolsonaro, testified in the Senate about government actors utilising "troll farms" to promote favourable political campaigns, it raised questions about the authenticity of Bolsonaro's online popularity (Barbiéri, Calgaro and Clavery 2019; Militão and Rebello 2019). However, as we propose in this paper, it is essential to recognise that these accounts are not merely fabricated entities. As empirical evidence suggests, they may act as vessels for authentic beliefs, mimicking user behaviour characterised by hyperpartisanship and antagonism (Omena et al. 2019, 2021a).

Data and Methods
In what follows, we adopt a cross-platform study to analyse data collected from various platforms, each treated as a distinct entity. While the primary study focus and query design strategies share the same topic (i.e., pro- and anti-bolsobots), the methods applied are tailored to accommodate the specificity of each platform, (sub)cultures of use, and distinct forms of appropriation of digital objects. The methodological protocol (Figure 2) shows how case studies were developed with digital methods, illustrating the process of curating, visualising and analysing cross-platform bolsobots datasets. The protocol promotes transparent data practices by appreciating the complexity of implementing methods, i.e. the combination of various software and technical practices.
The datasets were compiled with a two-step process involving qualitative and quantitative data collection phases. We employed an active, iterative, and interventionist method in the first step when designing cross-platform queries. The query design was informed by background knowledge of the Brazilian socio-political landscape and digital culture, including bot social cues, platform use, and technological grammar. This process involved active engagement in trying and testing different keywords on each platform to identify meaningful keywords and iteratively incorporate new terms based on a platform's recommended list of existing accounts.
On Instagram, we searched for underspecified pro-Bolsonaro and specified anti-Bolsonaro keywords to return likely bot accounts (see Figure 2). Leveraging preliminary findings (Omena et al. 2021a), we tailored specific query categories on TikTok based on their resonance on the platform. We made a final selection of "seed" bot accounts by verifying bot profile characteristics. For Twitter/X, we collected tweets using a list of pro-Bolsonaro hashtags as queries from January 2019 to July 2022, using the (now defunct) Twitter/X Academic API v2. This list contained the same queries as those used on TikTok and Instagram.

The second step, dataset building, proceeded as follows. The datasets include only publicly available information and no sensitive information about individuals. TikTok's dataset was created using Instant Data Scraper (Web Robots, n.d.), resulting in a CSV file of 1,966 bot-like accounts that included information on profile descriptions, number of followers, usernames, and profile image URLs. We obtained all profile images using DownThemAll (Parodi and Verna 2019). Building the Instagram and Twitter/X datasets involved additional steps. On Twitter/X, a list of 40 pro-Bolsonaro keywords was used to retrieve 3,486,622 Tweets by 170,044 accounts through the Academic API. Subsequently, we used Botometer (Yang, Ferrara and Menczer 2022) to generate "bot scores", retaining 98,760 unique users. "Bot scores" indicate the likelihood that a user is a bot based on their post history, profile image, number of followers and followees, and number of retweets. On Instagram, 70 bot "seed" accounts were used as entry points to collect information on the accounts they follow using PhantomBuster. The resulting dataset includes more than 60,000 accounts that belong to pro- and anti-bolsobots following networks. Finally, using the Instagram bot following network metadata (i.e., account ID and image URLs), we created additional datasets for profile descriptions, image profiles and associated web entities detected with Vision AI (Google Cloud, n.d.) via Memespector (Chao 2021).

An additional step was taken to verify probable bot accounts in the following network datasets. Although it is well established in bot studies that bots follow bots, statistical tests were conducted to validate the presence of such accounts. Using a t-test and an F-test, we analysed follower-following ratios of profiles grouped according to two rules: use of numeric usernames and default profile pictures. Accounts matching at least one criterion were classified as "suspected bots", while those matching neither were labelled as "not suspected bots". Both results confirmed a substantial presence of probable bot accounts in the network, with a statistically significant difference in the mean follower-following ratios between groups (4.18 vs. 1.93, respectively) (Table 1).

Following the dataset building steps, quali-quanti visual methods were tailored for each platform and curated dataset. Instead of treating data as a vague unit, we consider its relational aspects and navigate them through the lens of technical practices, i.e., the usage culture of each platform and the technicity of the research software in use.
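The grouping rules and significance test described above can be sketched as follows. This is a toy example: the accounts, column names and values are invented, and the real analysis ran on the 60,000-account following network.

```python
import pandas as pd
from scipy import stats

# Hypothetical dataset: each row is one account from the following network.
accounts = pd.DataFrame({
    "username":      ["maria", "jair98761", "patriota007", "ana_souza", "brasil555", "joao"],
    "default_photo": [False,   True,        False,         False,       True,        False],
    "followers":     [320,     12,          45,            800,         3,           150],
    "following":     [180,     510,         390,           400,         310,         120],
})

# Rule from the text: numeric username OR default profile picture => suspected bot.
numeric = accounts["username"].str.contains(r"\d{3,}")
accounts["suspected_bot"] = numeric | accounts["default_photo"]

# Follower-following ratio per group.
accounts["ratio"] = accounts["followers"] / accounts["following"]
suspected = accounts.loc[accounts["suspected_bot"], "ratio"]
others = accounts.loc[~accounts["suspected_bot"], "ratio"]

# Welch's t-test for a difference in mean ratios between the two groups.
t_stat, p_value = stats.ttest_ind(suspected, others, equal_var=False)
```

On real data, the F-test for variance equality would be run alongside, and the resulting group means reported as in Table 1.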
For Twitter/X, we repurposed Botometer outputs (Yang, Ferrara and Menczer 2022) with a qualitative approach, using statistical results not as the final but as a starting point. We grouped likely bots and non-bots to analyse changes in how they define themselves as bots or not, filtering tweets that mentioned phrases such as "you are a bot" ("você é robô"), "I am not a bot" ("eu não sou robô"), "I am a bot" ("eu sou robô") and similar expressions found throughout the dataset. To examine how bot strategies have evolved, we looked at the frequency and originality of hashtags they propagated over time, and how they countered platform moderation on Twitter/X and Instagram. While the former task was done by counting (unique) hashtags over time, the latter was done by comparing user statuses obtained in February 2022 and in July 2022.
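A sketch of this filtering-and-counting step (the toy tweets, accent-stripped text, and column names are invented; the real dataset spans 3.4 million tweets):

```python
import re
import pandas as pd

# Hypothetical tweet sample (text plus timestamp; accents stripped for brevity).
tweets = pd.DataFrame({
    "created_at": pd.to_datetime(["2019-03-01", "2019-03-15", "2022-07-02", "2022-07-20"]),
    "text": [
        "voce e robo? #BolsonaroPresidente",
        "eu nao sou robo, sou patriota #Brasil",
        "eu sou robo e dai #Eleicoes2022 #Brasil",
        "vamos juntos #Eleicoes2022",
    ],
})

# Filter tweets in which accounts negotiate their own "bot-ness".
bot_talk = tweets[tweets["text"].str.contains(r"\b(?:sou|e)\s+robo\b", regex=True)]

# Count unique hashtags per month to trace the originality of propagated tags.
tweets["month"] = tweets["created_at"].dt.to_period("M")
tweets["hashtags"] = tweets["text"].apply(lambda t: re.findall(r"#\w+", t))
unique_per_month = tweets.explode("hashtags").groupby("month")["hashtags"].nunique()
```

A production version would also normalise accents (so "robô" and "robo" match) and compare the February and July 2022 user-status snapshots.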
For TikTok, we adopted a navigational procedure to interpret the (non-)associations of profile images and usernames. Analysis was facilitated by RStudio (RStudio Team 2020), the Image Query and Extraction Tool (Chao and Omena 2021), ImageSorter (Visual Computing Group 2018), Google Sheets (Google Docs Editors n.d.), Google Slides (Google Docs Editors n.d.) and the TikTok search engine.
For Instagram, we conducted three interconnected levels of analysis by critically exploring and interpreting profile image collections' (1) content and (2) context, and (3) profile descriptions of publicly available information. To visualise and analyse image collections, we used ImageSorter (Visual Computing Group 2018), Memespector GUI (Chao 2021), Google Cloud Vision API's web detection methods (Google Cloud), Google Sheets (Google Docs Editors n.d.), Table2Net (Jacomy et al. 2021) and Gephi (Bastian, Heymann and Jacomy 2009), as described in Colombo, Bounegru and Gray (2023) and Omena et al. (2021a). Aligning with Aiello's (2020) visual semiotics, we zoomed in and out of pro- and anti-bolsobots image walls to explore colour clusters and image repetitions. Following Omena et al. (2021b), we also conducted a network vision analysis examining selected web entities and associated images. Within these networks, images can be grouped together if they are associated with the same web entities. As web entities are offline and online references, they provide profile images with a political and contextual background grounded in trustworthy or authoritative web pages (see Li et al. 2018; Sullivan 2020; Google User Content 2022).
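The image-to-web-entity network step can be illustrated with a minimal sketch. The Vision AI output structure is heavily simplified here, and the file names and entities are invented.

```python
from collections import defaultdict

# Simplified stand-in for Memespector/Vision AI output: each profile image
# mapped to the web entities detected for it (structure assumed, values invented).
vision_output = {
    "img_001.jpg": ["Jair Bolsonaro", "Flag of Brazil"],
    "img_002.jpg": ["Flag of Brazil", "Meme"],
    "img_003.jpg": ["Jair Bolsonaro", "Meme"],
}

# Bipartite edge list (image, entity), the shape Table2Net/Gephi expect.
edges = [(img, ent) for img, ents in vision_output.items() for ent in ents]

# Images sharing a web entity cluster together in the network projection.
entity_to_images = defaultdict(set)
for img, ent in edges:
    entity_to_images[ent].add(img)
```

In the actual workflow, the edge list is exported as a spreadsheet, converted with Table2Net, and laid out and explored in Gephi.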
As for profile description analysis, we used BERTopic modelling (Grootendorst 2022) to identify themes, agendas, and ideological views, and performed exploratory analyses examining platform appropriation. To tackle dataset imbalance, we sampled the larger anti-bolsobots dataset to match the pro-bolsobots dataset, totalling 25,358 profiles. We then trained the BERTopic model on this new dataset. Employing a supervised approach, we identified 46 topics that would most characterise pro- and anti-bolsobots by examining topic distances and representative words through interactive visualisations. The textual analysis identified themes and political agendas for each group, while exploratory data analysis mapped accounts' usage of emojis, hashtags, profile mentions, and other textual patterns.
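The balancing step before topic modelling amounts to downsampling the majority class. A sketch with toy data and invented column names:

```python
import pandas as pd

# Hypothetical profile-description corpus with an imbalanced class split.
profiles = pd.DataFrame({
    "description": [f"perfil {i}" for i in range(10)],
    "group": ["anti"] * 7 + ["pro"] * 3,
})

# Downsample the larger anti-bolsobots group to the size of the pro group,
# as done before training the topic model.
n_pro = int((profiles["group"] == "pro").sum())
anti_sample = profiles[profiles["group"] == "anti"].sample(n=n_pro, random_state=42)
balanced = pd.concat([anti_sample, profiles[profiles["group"] == "pro"]])
```

The balanced `description` column would then be passed to BERTopic's `fit_transform`; the model itself is omitted here, as it requires the heavyweight `bertopic` dependency.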

Findings
This section empirically illustrates how the proposed methodology offers pathways to study bots in context. This includes understanding how they operate within broader networks, how they mould themselves to correspond with online political identity and identification cultures, how they reprocess elements of online political culture, such as memes, political insignia and other tropes, and, finally, how they regain agency from increasing public scrutiny and platform content moderation. While the methods presented here originate from the bolsobots phenomenon, they can serve as "recipes" for uncovering bots' activities and visual patterns in other research endeavours.

Our first visual method unveils bot strategies by assessing the (non-)associations between profile images and usernames. The adoption of navigational procedures in the analysis combines visual models (i.e., image treemaps) with the TikTok interface and spreadsheet consultations. Initial findings from the bot profile image treemaps, where images were scaled by follower count, and from networks of ghost accounts guided the creation of a compositional image mapping (Figure 3), which elucidates the method's findings, detailed below.

Figure 3. Example of bolsobots' methods: combining strategic usernames with presidential, memetic, human-like and ghost-account (default image) profile pictures. Compositional image mapping created by querying the bot profile image folder (using the Image Query and Extraction Tool) with specific keywords, such as "president", "Jair" and the image ID identifying ghost accounts.
TikTok bolsobots rely on relational frameworks (the platform's memetic culture and bot social cues) and a situational context (the presidential election and related political debates) to generate repetitive slogan usernames and profile pictures featuring Bolsonaro-related themes. As a result, bolsobots can simultaneously cultivate a positive image of the candidate and appeal to young and digitally native audiences who are more likely to consume humorous content.
While profile pictures portray Bolsonaro as a respected figure and a laughable character, usernames follow a repetitive logic, including digits or characters or an array of Bolsonaro-related campaign slogans covering topics such as patriotism, family, and LGBT rights (Figure 3).
In a memetic vein, pro-bolsobots rely heavily on the term "bolsominion" in their TikTok usernames. The term combines the Bolsonaro name with the word "minion", a synonym for "follower" and a reference to the yellow army of replaceable servants in an animated movie franchise. Originally used to mock sycophantic Bolsonaro supporters, the term "bolsominion" was reappropriated by Bolsonaro supporters as an endearment of their loyalty to him (Oliveira 2020). We find accounts with human profile pictures and account names including a person's first name followed by "bolsominion". This naming pattern suggests a shift from using usernames as a personal signifier to a generic group identity that signals loyalty to a political leader. We also found an overrepresentation of TikTok accounts with a default profile image. Known as "ghost accounts", these profiles are frequently used to impose a presence in pro-bolsobots networks while avoiding scrutiny from users and platform moderation. It is important to note that unobtrusive bot accounts may change their profile pictures over time. They might initially feature a bolsominion picture but then switch to a Bolsonaro or human profile before eventually reverting to a default image.
Additionally, a portion of accounts present TikTok feeds with non-political videos, with some including anti-Bolsonaro hashtags. The mismatch between content and username may point to deliberate tactics to attract engagement through buzzwords. At a small scale, these findings show how bolsobots' strategies instrumentalise visual (profile image) and textual (username) online presence.

Bot following network method: Image Walls and the value of image repetition
Image walls reveal the significance of the dominant profile images of bots and their followee accounts, clustered by colour and repetition. They also uncover the visual tactics of bot following networks (Figure 4), supported by a visual semiotic analysis and a navigational procedure. Here, image repetition indicates either multiple accounts using similar profile pictures or one account being followed by many others. Whether "stolen" or AI-generated, these images lend themselves to close examination of individual visual elements, such as colour and style, to identify the networked strategies of bot accounts, with crucial implications for understanding their presence in online visual culture.
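The colour clustering behind an image wall can be sketched as follows. This is a minimal illustration, assuming each profile picture has already been reduced to an average RGB colour (in practice this step would use an image-processing library); all account names and colour values below are invented.

```python
import colorsys

# Hypothetical dataset: each profile picture reduced beforehand to its
# average RGB colour (0-255 per channel).
profiles = {
    "acct_a": (20, 160, 60),    # green-dominant
    "acct_b": (200, 30, 40),    # red-dominant
    "acct_c": (30, 140, 70),    # green-dominant
    "acct_d": (180, 20, 30),    # red-dominant
}

def hue_bucket(rgb, buckets=6):
    """Map an RGB colour to one of `buckets` coarse hue bins so that
    visually similar profile pictures land in the same wall cluster."""
    r, g, b = (c / 255.0 for c in rgb)
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return int(h * buckets) % buckets

# Group accounts by coarse hue: greens and reds separate cleanly,
# echoing the partisan colour clusters discussed in the text.
clusters = {}
for acct, rgb in profiles.items():
    clusters.setdefault(hue_bucket(rgb), []).append(acct)
```

Coarse hue bins are of course only a first pass; the qualitative reading of what each cluster means (partisanship, resignified symbols) remains interpretative work on top of the sorted wall.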
First, in the case of bolsobots following networks, the use of colour symbolism and repetitive visual elements on profiles acts not only as an expression of partisanship but also as a cue of the bot market's attunement to evolving discursive disputes between both sides of the political spectrum. Second, the reliance of bot accounts on human-like images exemplifies an attempt to blend in and interact with regular users while conveying a sense of authenticity in the attention economy of social media.
Upon an initial overview of the image walls, it is unsurprising to observe that dominant colour clusters reflect Brazilian political polarisation. Each following network assigns ideologies, identities, and meanings to its respective colour associations. Pro-bolsobots are clustered in green, which synthesises Bolsonaro's conservative agenda and praise for the national flag. Conversely, anti-bolsobots cluster around red, historically associated with leftist parties and, in Brazil, Lula's Workers' Party (PT).
Nevertheless, a closer reading of colour clusters reveals that symbols are also subject to (re)signification strategies. Similar to hashtag wars, where a trending tag is reappropriated into its opposite meaning,6 profile pictures have been used to empty or reclaim ownership of certain symbols. Associated since the 2018 election race with Bolsonaro's campaign, the Brazilian flag has been progressively reclaimed by the president's opponents (Soares 2020). Anti-bolsobots following networks also follow this trend, adopting the Brazilian flag with specific features to express their political stance. Symbols appear covered in blood and marked with the words "democracy" and "mourning". Moreover, Bolsonaro's figure is "memefied" as an evil clown but portrayed as a hero and emperor in his supporting bot following network. Likewise, the Antifa symbol appears in the red cluster aligned with its original antifascist intent, and in the green cluster resignified as an identifier of anti-leftists, accompanied by slogans such as conservative, anti-leftist, and anti-terrorist. Traffic signs - another repetitive imagery pattern - also point in both political directions. There is constant symbolic feedback between opposition groups, from the right-wing "turn right" sign to the countering "turn left" and the counter-countering "do not turn left".
The analysis of human clusters reveals photos sharing a consistent visual "style" (Manovich 2016), featuring close-up headshots of an individual against a white background. This image usage allows bot accounts to maintain the illusion of being human even when displayed at small sizes, such as in Instagram comment sections. Moreover, duplicates of political figures in the human cluster may serve as important visual symbols to express and attract users from the pro- and anti-bolsobots camps.

Network vision analysis for capturing web references of bots' visual repertoires
Network vision analysis (Figure 5) reveals the relationship between pro- and anti-bolsobots profile pictures and their broader web ecosystem, encompassing political, ideological, and cultural dimensions. Instead of solely focusing on the visual representation and aesthetics of the images, as in image wall analysis, this type of investigation provides a more comprehensive understanding of the context in which the images are shared, disseminated, and interpreted online. The results offer insights into pro- and anti-bolsobots visual repertoires through the lens of web entities - either conceptual (e.g., politics, political party, democracy) or descriptive of real-world individuals, institutions and objects (e.g., member of the Chamber of Deputies of Brazil, president of Brazil, flag of Brazil).
A first look at the network node colours exemplifies how web entities, such as politics, Brazil, democracy and the Brazilian flag, reflect offline and online political topics. For example, among pro-bolsobots profile images (in yellow), we notice a recurrent symbolic association in real-life rallies and web engagement: the dominant presence of the Brazilian flag. Also stemming from this principle is the complete dissociation of the entity "democracy" from pro-bolsobots images, as Bolsonaro's statements are frequently recognised as authoritarian and undemocratic by national and international media.
A second visual exploration of the network shows Bolsonaro images as dominant among Brazilian politics-related web entities and in both pro- and anti-bolsobots image collections. Filtered networks reveal Bolsonaro's face portrayed in various facets, either in his favour or against him. This includes images of Bolsonaro as a victorious politician, a family or people's man, funny pop artistry and cartoonish memes, as well as poop emojis and profile images depicting him as a senseless, maniacal or incompetent president. In the context of web entities, these images play a twofold role: they may enrich the web's political memetic cultures and spark offline attention, conversations or debates in support of (or opposition to) Bolsonaro; and they may also reinforce his online presence, feed Instagram algorithms, and boost his online content and visibility.
Similarly, analysis of images tagged as "politics" and "political party" reveals logos of historical political parties (e.g., PCB), as well as Bolsonaro's (then) party-in-the-making, Aliança pelo Brasil. This indicates that pro-bolsobots following networks are capable of fabricating the online presence of an aspiring political party to such an extent that they trick vision AI into recognizing their images alongside those of parties established for more than a hundred years.
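The bipartite image-entity structure underlying this kind of analysis can be sketched as follows. This is a minimal illustration on invented data: the image identifiers and entity labels stand in for the output of a vision API, and the aggregation simply counts how often an entity attaches to each camp's images.

```python
from collections import Counter

# Hypothetical computer-vision output: each profile image mapped to the
# web entities returned for it, keyed by (camp, image id). This is the
# edge list of a bipartite image-entity network.
image_entities = {
    ("pro", "img1"): ["flag of Brazil", "politics"],
    ("pro", "img2"): ["flag of Brazil", "Jair Bolsonaro"],
    ("anti", "img3"): ["democracy", "politics"],
    ("anti", "img4"): ["flag of Brazil", "democracy"],
}

def entity_profile(camp):
    """Count how often each web entity is attached to a camp's images -
    the basis for comparing pro- and anti-bolsobot visual repertoires."""
    c = Counter()
    for (side, _), entities in image_entities.items():
        if side == camp:
            c.update(entities)
    return c

pro, anti = entity_profile("pro"), entity_profile("anti")
```

In this toy data, "democracy" attaches only to anti-bolsobots images, mirroring the dissociation noted above; the real analysis works on the full bipartite network rather than marginal counts.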

Profile descriptions for mapping bot political ideologies
Analyses of Instagram's profile descriptions allowed us to map ideological differences and platform appropriations between pro- and anti-bolsobots following networks. The analysis points to pro-bolsobots aligning with conservative values and economic sectors of society, while anti-bolsobots exhibit diverse and more progressive agendas, united primarily by opposition to the president. The pro- and anti-bolsobots accounts differ not only in the types of emojis, hashtags, and profiles mentioned in their bios but especially in how they combine these usage patterns in their profile descriptions.

Figure 6
Profile description topics of bot-following networks on Instagram

Anti-bolsobots profiles generally align with progressive agendas and fill their bios with information about art, self-care, environmentalism (e.g., meditation, yoga, veganism, biology), vaccination, and mask-wearing (Figure 6). In contrast, pro-bolsobots profiles often identify as police officers, firearm supporters, business owners, entrepreneurs, dentists, or individuals linked to agribusiness and anti-corruption. Overall, pro-bolsobots display fewer themes, suggesting a more orchestrated approach than the diverse anti-bolsobots' agendas. This consistency is also evident in their usage of emojis. Both groups use the Brazilian flag (🇧🇷), but pro-bolsobots use it repeatedly and combine it with other symbols, whereas anti-bolsobots use it less repetitively and often pair it with a heart emoji (❤). An index finger pointing downwards (👇) is also used by both groups, but anti-bolsobots point to a broader range of external links (e.g., personal websites and e-commerce platforms), while pro-bolsobots tend to point to Bolsonaro's campaign website and channels on YouTube, Telegram, and Facebook.
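The contrast in flag-emoji usage between the two camps can be expressed as a small pattern check. This is an illustrative sketch: the bios below are invented, and the measure (longest consecutive run of Brazilian-flag emojis) is only one of the emoji-usage features one could derive from profile descriptions.

```python
import re

# The Brazilian flag emoji is a pair of regional-indicator characters.
FLAG = "\U0001F1E7\U0001F1F7"  # 🇧🇷

# Hypothetical profile descriptions illustrating the two usage patterns:
# repeated flags for pro accounts, a single flag plus a heart for anti.
bios = {
    "pro_1": f"Patriota {FLAG}{FLAG}{FLAG} #fechadocombolsonaro",
    "anti_1": f"Arte e meditação {FLAG}\u2764 #forabolsonaro",
}

def max_flag_run(bio):
    """Length of the longest consecutive run of Brazilian-flag emojis,
    a simple proxy for the repetitive flag usage described above."""
    runs = re.findall(f"(?:{FLAG})+", bio)
    return max((len(r) // len(FLAG) for r in runs), default=0)
```

Each flag occupies two code points, so run length is the matched span divided by two; a fuller analysis would add hashtag and mention extraction on the same bios.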
Hashtags and profile mentions also point to different appropriation patterns. Pro-bolsobots commonly mention the official profiles of the president and the Aliança pelo Brasil, whereas anti-bolsobots rarely mention Bolsonaro's political opponent Lula or his Workers' Party, targeting instead Brazilian soccer club profiles. Looking at Botometer scores of users from 2017 to 2021, it becomes evident that bots have leveraged political tipping points, capitalised on platform trends and exploited moderation loopholes to maintain a nuanced yet impactful presence on Instagram and Twitter/X. By examining the distribution of bot-like and non-bot-like profiles, we see that these accounts have tended to generate and deploy new hashtags in relation to events on the ground. As Figure 7 shows, accounts that were likely bots tended to generate and disseminate new hashtags over the years, particularly during periods of greater political salience for Bolsonaro. In 2021, for example, over 50% of new hashtags were generated by profiles with a botscore exceeding 0.75. This indicates an increased presence of bots on the platform and a transition from trend followers to trend setters.
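The 0.75-threshold reading can be sketched as a simple aggregation. The records below are invented stand-ins for (hashtag, Botometer score of the first user to tweet it) pairs; the function just computes the share of new hashtags introduced by high-scoring accounts.

```python
# Hypothetical records: (new hashtag, Botometer score of the account
# that first tweeted it). All values are illustrative.
new_hashtags = [
    ("#slogan1", 0.92), ("#slogan2", 0.81), ("#slogan3", 0.40),
    ("#slogan4", 0.88), ("#slogan5", 0.15),
]

def bot_share(records, threshold=0.75):
    """Fraction of new hashtags first tweeted by accounts whose
    Botometer score exceeds `threshold` (0.75 in the analysis above)."""
    botted = sum(1 for _, score in records if score > threshold)
    return botted / len(records)

share = bot_share(new_hashtags)  # 3 of 5 records exceed 0.75 -> 0.6
```

Repeating this per year, as in Figure 7, is what reveals the shift from trend followers to trend setters.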

Figure 7
Box plot showing the Botometer score of profiles that tweeted new pro-Bolsonaro hashtags per year on Twitter/X. The higher the score, the more likely the user is a bot.

Among the many strategies employed to evade platform or public scrutiny, bolsobots have tended to engage directly in debates about what or who is a bot (Figure 8). In response to scrutiny by news media (e.g., Globo, CNN), bot detection algorithms (e.g., Botometer or Bot Sentinel) or Bolsonaro opposition figures, some bots claim that they are ordinary and authentic people, such as housewives and working-class Brazilians. In doing so, they accuse bot detectors of a kind of elitism that consists in mischaracterizing earnest Brazilian people as vehicles for disinformation. They also appropriate accusations of being bots in an effort to ridicule others for thinking that any political opposition must be inauthentic. These deflections appear in periods of political vulnerability for Bolsonaro, such as when his political campaign faced scrutiny for using coordinated inauthentic behaviour networks across social media (G1 2020). To further evade moderation, we find that, depending on the platform, bots may remain "dormant" in periods of low political salience and are then reactivated when necessary (Figure 9). That is, they may switch to private mode or be temporarily suspended until they occasionally resurface under different usernames. These findings underscore that moderation efforts do not necessarily result in the permanent eradication of bots: just as bots actively shape public discourse about what counts as a bot, they also actively self-moderate to remain on the platform. Both examples show how bots seek to regain agency over public and platform efforts to scrutinise their authenticity.
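The dormancy pattern could be operationalised as spell detection over periodic account-status snapshots. This is a minimal sketch on a hypothetical timeline; in practice the statuses would come from repeated observation of the accounts, as in the data behind Figure 9.

```python
# Hypothetical monthly snapshots of one account's status: the account
# goes private, is suspended, then resurfaces - the dormancy-and-return
# cycle described in the text.
snapshots = ["active", "active", "private", "suspended", "active", "active"]

def dormancy_spells(statuses):
    """Return (start, end) index pairs of consecutive non-active
    snapshots, i.e. periods when the account was dormant or removed."""
    spells, start = [], None
    for i, status in enumerate(statuses):
        if status != "active" and start is None:
            start = i                      # a dormancy spell begins
        elif status == "active" and start is not None:
            spells.append((start, i - 1))  # the spell has ended
            start = None
    if start is not None:                  # spell runs to the last snapshot
        spells.append((start, len(statuses) - 1))
    return spells
```

Comparing spell timings against political salience periods is then a matter of aligning these index ranges with an events timeline.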

Figure 9
Account statuses of Twitter/X and Instagram bots, May 2020 to mid-2022

Bot Methodologies: Learning from Findings
In contrast to prevailing methodological approaches that primarily rely on research-friendly APIs and off-the-shelf detection tools to study bots, our findings show a range of quali-quanti visual methods to creatively investigate the agency of bolsobots and their strategies on Instagram, TikTok and Twitter/X. To unveil their modi operandi, we mapped their following networks, explored visual-textual content in profile pictures (pattern repetition), account names (slogan usernames) and profile descriptions (text bio, emojis, hashtags, links), and observed the strategies they develop in response to ongoing public and platform scrutiny. Overall, the results show how pro- and anti-bolsobots sustain decentralised content generation practices that aim to sway public opinion continuously and consistently. Central to bolsobots' tactics is indeed the use of automation to systematically reproduce user visual cultures at a mass scale and be perceived as authentic or ordinary users. On TikTok and Instagram, bolsobots systematically appropriate the profile pictures of "real" human peers to maintain the illusion of being human even when shrunk to thumbnails. They create identities from a collage of organic online subcultural tropes, including memes and catchphrases such as "bolsominions". In other instances, the use of political imagery, though concocted, plays a role in fooling AI verification systems into classifying their images as genuine symbols. Meanwhile, ghost accounts can easily switch between any one of a range of political identities by simply altering profile pictures.
Bolsobots also work to stay on top of socio-political debates by constantly adapting their online discourse. When public debate on the usage of bots for political campaigns erupted on social media around 2019, Twitter/X bolsobots deflected attention away from themselves by embracing accusations of fakeness as a sign of authenticity or framing such accusations as inauthentic - that is, as detached, elitist smears against authentic and hard-working Brazilian citizens. Bolsobots also actively adapted to changing online political agendas by evolving from trend followers to trend setters. On TikTok, for example, bolsobots profile pictures sought to appropriate or "re-signify" key political imagery. And finally, bots can conveniently switch their profiles "on" or "off" to evade content moderation at politically salient moments.
These findings show how the interplay between automation and organic social media political culture exposes the limits of verification methods. Verification, in the form of bot likelihood scores, for example, is limited by quantitative bias and by its tendency to separate its object of study (bots) from the socio-technical environment in which it continuously evolves. This is why we have chosen digital methods that do not separate bots from their user and platform cultures. We have attempted to repurpose verification tools, platform search engines and a range of metadata to show how bots are ultimately a product of their own socio-technical environments. These findings and methodological devices can offer valuable insights to bot research by recognizing that digital culture phenomena, like the one under investigation, emerge from the intricate interplay of diverse elements facilitated and restricted by participatory media platforms, users, devices, and information practices in both physical and digital realms (Marres 2018).

Conclusion: Methodological Takeaways
This paper has critically examined alternatives to current bot detection and analysis techniques, namely quali-quanti visual methods. Further, it has provided a detailed operationalisation of a bot methodology, drawing on a cross-platform study of bolsobots. It concludes by presenting three main takeaways that form the core of a proposal to reimagine bot studies with digital methods as a way to overcome the challenges such studies pose.
The first methodological takeaway of this study is suggesting alternative solutions for dataset building beyond automated methods (e.g. quantitative analysis with Botometer), particularly when dealing with platforms with limited API access and scraping blockers. Rather than proceeding with data extraction at scale, we developed a context-sensitive query design and dataset curation method with platforms' interfaces and search engines that relied on the social cues of bots, visual stereotypes, and following networks. Methodological decisions are dependent on, and responsive to, these factors. Grounding research work in such practices proposes a shift in bot research methods, where the starting point is no longer the outputs of bot-scoring detectors but socio-technical knowledge.
As methodological fieldwork, social media search features mediated our work in identifying keywords embedded with cultural and political resonance in this particular issue space. The selection of keywords was essential for listing bot accounts. However, since each platform responds differently to search queries, contextual keywords underwent an iterative process. Thus, we added or discarded keywords; for instance, "direita da opressão" was included on Instagram in July 2021 but dropped on TikTok in February 2022. Account-based queries, which utilise repetitive bot usernames, showed promise as a technique for building bot datasets, though further exploration is necessary.
Following networks led to the discovery of a diverse ecology of bots, consisting of mainstream (i.e. seed accounts) and unobtrusive bots. These bots took the appearance of humans, "ghost", memetic and random accounts, as well as ordinary people and public figures. Yet, this approach was prone to technical limitations. While scraping following networks on TikTok is impossible, it is restricted on Instagram. Each account can only return up to 7,000 followers, and the account scraping is at risk of being blocked if a threshold is exceeded. For this reason, scraping the public profile information of the following networks took over a year.
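The slow, capped collection described above can be sketched as a generic pagination loop. This is an illustrative sketch only: `fetch_page` is a hypothetical stub standing in for whatever rate-limited collection route is used, not a real platform API, and the cap and pause values are placeholders.

```python
import itertools
import time

FOLLOWER_CAP = 7_000   # per-account ceiling mentioned in the text
PAGE_SIZE = 100
PAUSE_SECONDS = 0      # set > 0 in real use to stay under rate limits

def fetch_page(account, cursor):
    """Hypothetical pager stub. A real implementation would query the
    platform; here it yields synthetic follower ids for an account that
    'has' 250 followers, so the loop logic can be exercised."""
    start = cursor * PAGE_SIZE
    total = 250
    return [f"{account}_f{i}" for i in range(start, min(start + PAGE_SIZE, total))]

def collect_followers(account):
    """Page through an account's followers, truncating at the cap and
    pausing between requests - slow, conservative collection of the
    kind that made scraping the networks take over a year."""
    followers = []
    for cursor in itertools.count():
        page = fetch_page(account, cursor)
        if not page:
            break
        followers.extend(page)
        if len(followers) >= FOLLOWER_CAP:
            followers = followers[:FOLLOWER_CAP]
            break
        time.sleep(PAUSE_SECONDS)
    return followers
```

The design choice is deliberate: truncating at the cap and sleeping between pages trades completeness and speed for continuity of access, which matters more when a blocked account can stall collection for weeks.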
The second takeaway is making sense of bolsobots' characteristics with quali-quanti visual methods. To enhance data-critical analysis within socio-technical environments, we considered creating visual data while navigating the intricate layers of technical mediation in research methods. A crucial interpretative step is not only to acknowledge the elements of data assemblage (D'Ignazio and Klein 2020; Kitchin and Lauriault 2014) but also to recognize the co-creation process with and about research software. This includes understanding the epistemological dimension of software in research practices (Omena 2021). In addition, it is necessary to employ multiple, complementary visual methods to provide richer contexts, including qualitative engagement with data visualisations through a navigational procedure, while understanding how our decisions impact what we see.
In the analysis of bot-following networks' profile image collections, two methods served complementary purposes: image walls offer a possibility to verify the aesthetics of images, whereas computer vision networks enrich this process with the context in which images are shared and interpreted online. Moreover, the interpretation of image repetition, either grouped by colour or tagged by web entities, was guided by practical knowledge of the dataset building (i.e. one account may be followed by others, and several followers may share similar images) and of the algorithmic outputs in use (e.g. computer vision and what web entities mean).
Finally, we explore how visual methods challenge the boundaries of bot-oriented research and image analysis. While a non-binary perspective offers valuable insights, there are appropriate occasions to distinguish between a bot and a non-bot account. Using digital methods' technical imagination helped us navigate this challenge. For instance, when examining the images and text in the bot following networks, we considered outputs to be part of a socio-technical environment where both bot accounts and regular users coexist and cannot be distinguished except through close examination. Conversely, when examining human clusters based on visual cues, results pointed to clusters of stock-like headshot pictures combined with white backgrounds, indicating a repetitive pattern in image appropriation and, thus, potential human-like bot accounts.
In conclusion, we argue that these methodological takeaways may broaden the range of methods available to comprehensively analyse bots across social media platforms. By encapsulating the principle that methodological inquiry is intrinsic to empirical investigation and vice versa, this paper contributes to method innovation in bot studies and to the emerging quali-quanti visual methods literature.

Figure 1
What do bolsobots look like? Examples of accounts on Instagram, TikTok and Twitter/X.

Figure 2
Methodological protocol for capturing, visualising and analysing political bots.

Figure 4
Bolsobots' visual vernaculars on Instagram: profile pictures of bots and their followees

Figure 5
Bipartite network of bolsobots' Instagram profile pictures and associated web entities

Figure 8
Users and (likely) bots who claim they are or are not bots on Twitter/X between 2019 and 2022. Every dot represents a tweet posted by one user. The colour indicates Botometer scores.

Table 1
Descriptive statistics of number of followers, number of following, and follow-follower ratios of accounts suspected as bots and not suspected as bots, using F-test and t-test. *** denotes p-value < 0.001