Archive for October 5th, 2010

Methodology: Draft/Free writing (part 5)

Posted by Laura on Tuesday, 5 October, 2010

This feels mostly finished in terms of content to add to it. I need to get feedback from my supervisors regarding what is missing or what could be improved. (I know it isn’t perfect. I just wanted it written so I could get on with editing it and realizing where the problems are.) Any and all feedback is appreciated.



Methodology


Types of Social Media Research

When conducting social media research, there are ten general methods that can be used to gather and analyze data. These are:

  1. Individual case studies for how a business uses social media and the web;
  2. Search and traffic analytics analysis;
  3. Sentiment analysis and reputation management;
  4. Content analysis;
  5. Usability studies;
  6. Interaction and collaboration analysis;
  7. Relationship analysis to try to determine how people interact and to identify key influencers;
  8. Population/demographic studies;
  9. Online target analysis of behavior and psychographics; and
  10. Predictive analysis.

Each of these methods offers insight into different aspects of the web and its population. The type of analysis used is often specific to the purpose of the research, frequently blends approaches from traditional analysis types, and different methods are often used in conjunction with each other. These methods often combine quantitative and qualitative analysis. Choosing the correct method of gathering and analyzing data can be one of the biggest hurdles to measuring ROI and understanding how a community works.

This section will provide a brief summary of each type, explain how to conduct that type of research, and give examples of studies that used the methodology.

Individual case studies for how a business uses social media and the web.

Case studies on social media usage are often done to measure the effectiveness of specific actions taken by an organization.

Bronwyn et al. (2005) say case studies “typically examine the interplay of all variables in order to provide as complete an understanding of an event or situation as possible. This type of comprehensive understanding is arrived at through a process known as thick description, which involves an in-depth description of the entity being evaluated, the circumstances under which it is used, the characteristics of the people involved in it, and the nature of the community in which it is located.”

This methodology often incorporates components of all the other methods discussed in this section. The specific methods often depend on the goals of the person or organization conducting the case study.

Vincenzini (2010) did a case study on the NBA’s use of social media in an attempt to explain why the league has been successful in using it for promotion. The author used quantitative analysis to measure the size of the community, the volume of content it was viewing on sites like YouTube and the volume of content it was creating on sites like Twitter. The quantitative analysis was synthesized with explanations from NBA employees, who described their practices in the context of their own business decisions as they pertained to social media. This was followed up with an assessment of what worked and what did not, along with advice to help others involved with sport and social media leverage their own position.

Case studies are a mixed-methodology approach, borrowing from other approaches. The major difference is that the case study focuses on a narrower perspective, with the goal of tracking behavioral changes or of advising others on how an organization changed its practices and how those lessons can be applied elsewhere.

Search and traffic analytics analysis.

Search engine and traffic analytics analysis is generally done internally to determine how to optimize a site in order to increase the number of visitors it gets and the total number of pages they view. This method involves identifying how people arrive at a specific site and which pages they visit while there. Traffic analytics analysis often includes six different components: search engine visitors, paid search advertisements, pay per click, organic traffic, direct traffic and internal site traffic.
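
As an illustration of how these components differ, a single visit can be assigned to one of them from its referrer string alone. The sketch below is a simplified classifier using hypothetical domains, and it assumes the common (but not universal) convention that paid Google clicks carry a gclid parameter; real analytics packages use far more signals than this.

```python
from urllib.parse import urlparse, parse_qs

SEARCH_ENGINES = {"google.com", "bing.com", "yahoo.com"}

def classify_visit(referrer, own_domain="example.com"):
    """Classify one visit by its referrer string (simplified sketch)."""
    if not referrer:
        return "direct"                       # no referrer: typed URL or bookmark
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    query = parse_qs(urlparse(referrer).query)
    if host == own_domain:
        return "internal"                     # navigation within the site itself
    if host in SEARCH_ENGINES:
        # paid search clicks are often tagged with a campaign parameter
        return "paid search" if "gclid" in query else "organic search"
    return "referral"                         # any other external site

print(classify_visit(""))                                         # direct
print(classify_visit("https://www.google.com/search?q=library"))  # organic search
```

A real implementation would also distinguish paid advertisements from pay-per-click campaigns, typically via campaign-tagging parameters agreed on in advance.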

Ramos and Cota (2009) define traffic analytics as “Tools that analyze and compare customer activity in order to make business decisions and increase sales. Analytics tools can report the number of conversions, the keywords that brought conversions, the sites that sent converting traffic, conversion by campaign, and so on.”

There are a number of different methods and tools that allow for this type of analysis. Early in the history of the Internet, one of the most popular methods involved analyzing web server log files. (Jansen 2009) Another popular early method was page tagging, which involved embedding an invisible image on a page which, when loaded, “triggers JavaScript to send information about the page and the user back to a remote server.” (Jansen 2009) These earlier approaches have since evolved into tools like Quantcast and Google Analytics. Kaushik (2010) recommends Google Analytics, a free tool that involves putting a snippet of code on all pages of a site, and points out that various types of traffic analysis can be done using the tools it provides. The author claims that Google Analytics allows you to break the analysis down into “three important pieces: campaign response, website behavior, and business outcomes.” (Kaushik 2010)

Fang (2007) completed a case study at the Rutgers-Newark Law Library in order to track library website usage, track visitor behavior and determine how to improve the website to better serve users. Earlier work done by the library had involved surveys handed out to patrons, analysis of log files, and the use of counters. (Fang 2007) The author changed methods because of inherent flaws in using those approaches to analyze website needs, and instead used Google Analytics to track user activity on the library’s website. The library “found out how many users were accurately following the path we had designed to reach a target page.” (Fang 2007) This sort of path-following navigation was one of the goals they had when they designed the site. They also found that “Visitor Segmentation showed that 83% of visitors were coming from the United States. About 50% of U.S. visitors were from New Jersey, and 76% of these were from Belleville and Newark. These results matched our predictions for patrons’ geographical patterns.” (Fang 2007) The results of this analysis enabled the library to make changes to improve its website.

This type of methodology lends itself more to a case study approach and often requires the consent of the website’s operator in order to access private logs. It can be used in conjunction with other methods, but is best suited to targeted analysis of highly specific research areas.

Sentiment analysis and reputation management.

Sentiment analysis involves identifying content related to a topic and identifying the emotion connected to that content. In a sport context, sentiment analysis could involve determining whether newspapers are providing positive or negative coverage of a team. In a social media context, it could involve determining the attitudes expressed in individual tweets on Twitter. Reputation management goes one step further: once sentiment has been determined, a decision needs to be made about whether and how negative and positive content should be responded to. Sentiment analysis is passive, and can be conducted by non-stakeholders. Reputation management is active, and is primarily conducted by stakeholders as part of ongoing activities to improve a brand, be it personal or corporate.

While sentiment analysis and reputation management share a desire to monitor and respond to a situation, the tools available differ for each type. A variety of tools exist for sentiment analysis. One such tool is the freely available lists of words “that evoke positive or negative associations.” (Wanner et al. 2009) Sterne (2010) suggests that content being retweeted on Twitter can be seen as a measure of positive sentiment, but that the follows/followers ratio is not an effective tool for measuring sentiment on Twitter. Reputation management tools include Trackur, which allows you to “set up searches and the system automatically monitors the Web for key words that appear on news sites, blogs, and other social media.” (Weber 2009)

Wanner et al. (2009) did a sentiment analysis of RSS feeds focused on the 2008 United States presidential elections. They selected 50 feeds connected to the elections and collected updates to these feeds every 30 minutes for one month, starting 9 October 2008. For each item collected from the feeds, they also recorded the date, title, description and feed id. (Wanner et al. 2009) They then eliminated noise, which mostly consisted of non-content like URLs, and filtered out all items that did not contain one of the following terms: “Obama”, “McCain”, “Biden”, “Palin”, “Democrat” and “Republican”. (Wanner et al. 2009) Sentiment was then analyzed using freely available lists of words “that evoke positive or negative associations”, and the results were visualized. (Wanner et al. 2009) Five events that happened during this period were chosen for more detailed visual examination. They found that the news regarding possible abuse of power by Sarah Palin in Alaska resulted in many negative posts, and that the debates resulted in low sentiment scores for both candidates as they attacked each other. The authors concluded that the visual tool they created would be useful for monitoring public debates.
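
The filter-then-score step Wanner et al. describe can be sketched in a few lines. The word lists below are illustrative stand-ins for the freely available association lists the authors actually used:

```python
import re

# Illustrative word lists; Wanner et al. used freely available association lists.
POSITIVE = {"win", "success", "hope", "support", "strong"}
NEGATIVE = {"abuse", "attack", "scandal", "fail", "weak"}
CANDIDATE_TERMS = {"obama", "mccain", "biden", "palin", "democrat", "republican"}

def sentiment_score(item):
    """Score one feed item: +1 per positive word, -1 per negative word.

    Returns None for items filtered out because no election term appears.
    """
    words = re.findall(r"[a-z']+", item.lower())
    if not CANDIDATE_TERMS & set(words):
        return None
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("Palin faces abuse of power scandal"))  # -2
print(sentiment_score("Weather today"))                       # None (filtered)
```

Scores per item can then be aggregated into 30-minute buckets for visualization, mirroring the collection interval the authors used.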

This methodology can overlap with influencer identification (Weber 2009) as part of reputation management involves determining which people are worth responding to. It can also overlap with psychographics. Despite the obvious overlaps, this type of research often appears independently and not as part of a larger study.

Content analysis.

Content analysis involves looking at the individual components of something larger and analyzing them. In a social media context, the content could be comments on a Facebook fanpage, or all the tweets made by a person or group. Content analysis can be either qualitative or quantitative, depending on the purpose of the research.

With content analysis, the researcher views data as “representation not of physical events but of texts, images and expressions that are created to be seen, read, interpreted, and acted on for their meanings, and therefore be analyzed with such uses in mind.” (Krippendorff 2007) Krippendorff (2007) defines the basic methodology used in content analysis as unitizing, sampling, recording, reducing, inferring, and narrating.

An example of content analysis is a 2009 study by Kian, Mondello and Vincent, which looked at ESPN’s and CBS’s Internet coverage of the men’s and women’s NCAA basketball tournaments, also called March Madness. The authors spelled out the methodology as: “All 249 (N = 249) byline articles from CBS SportsLine and ESPN Internet were read, coded, and content analyzed to determine the descriptors in Internet articles.” (Kian, Mondello, & Vincent, 2009) The authors used multiple coders to help prevent bias in the interpretation of gendered language. The two sites in the sample were chosen because they were the largest, and all types of March Madness content were included. Only the text of the content was analyzed; titles and authors were not. Categories for coding gendered language were based on previous research by sport media researchers, and only descriptors were coded. Totals for gendered descriptors were then calculated and an analysis was completed.
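
The coding-and-tallying step of a content analysis like this can be sketched as follows. The descriptor categories here are hypothetical examples, not the actual coding scheme Kian, Mondello and Vincent derived from prior sport media research:

```python
from collections import Counter
import re

# Hypothetical coding scheme; the real study derived its categories
# from earlier sport media research.
DESCRIPTOR_CODES = {
    "physicality": {"strong", "powerful", "athletic"},
    "appearance":  {"pretty", "graceful", "attractive"},
}

def code_article(text):
    """Tally coded descriptors found in one article's body text."""
    words = re.findall(r"[a-z]+", text.lower())
    tally = Counter()
    for category, descriptors in DESCRIPTOR_CODES.items():
        tally[category] += sum(w in descriptors for w in words)
    return tally

# Totals across the whole sample are the sum of per-article tallies.
totals = Counter()
for article in ["A strong, athletic performance", "A graceful and pretty display"]:
    totals += code_article(article)
print(totals)
```

In the actual study, multiple human coders performed this step independently so that inter-coder agreement could guard against biased interpretation; the automated tally is only a sketch of the bookkeeping.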

This method of analysis can be combined with other forms of analysis, like sentiment analysis, as part of a usability or collaboration study, or it can be done separately. It often appears most successful when done as a distinct component of a larger study, helping to provide context for other data analysis.

Usability studies.

In a social media context, usability studies look at how people use some aspect of the Internet or software that connects to it.

According to Sweeney, Dorey and MacLellan (2006), one of the purposes of a usability study is to “point out specific usability problems with your Web site interface in line with how well your Web site speaks to your audience and their goals.” Jerz (2002) cautions that “Simply gathering opinions is not usability testing — you must arrange an experiment that measures a subject’s ability to use your document.” That caution also explains the general methodology of a usability study outlined by Jerz (2002): collect both quantitative and qualitative data, where the quantitative data involves some type of measurement and the qualitative data allows testers to express their opinions. Jerz (2002) suggests using at least five testers for the first run. Then, after fixing errors and problems based on tester feedback, another five testers test the site to confirm that those errors have been fixed.
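
Jerz’s split between measurement and opinion can be illustrated with a minimal sketch: each of the five first-round testers contributes a quantitative record (time on task, completion) plus a free-form opinion. The data below is invented for illustration:

```python
from statistics import mean

# Hypothetical results from a first round of five testers (Jerz's minimum):
# each record pairs measurements with a free-form opinion.
results = [
    {"seconds": 42, "completed": True,  "opinion": "search box hard to find"},
    {"seconds": 65, "completed": True,  "opinion": "too many menu levels"},
    {"seconds": 90, "completed": False, "opinion": "gave up on the form"},
    {"seconds": 38, "completed": True,  "opinion": "fine once I found the link"},
    {"seconds": 55, "completed": True,  "opinion": "labels are confusing"},
]

completion_rate = mean(r["completed"] for r in results)   # quantitative measure
avg_time = mean(r["seconds"] for r in results)            # quantitative measure
complaints = [r["opinion"] for r in results]              # qualitative feedback

print(f"{completion_rate:.0%} completed, {avg_time:.0f}s average")
```

After fixing the problems the opinions reveal, the same measurements are repeated with a fresh set of five testers to confirm the fixes actually moved the numbers.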

An example of a usability study is one conducted by Sturgil, Pierce and Wang (2010). The study tried to determine how much content readers of Internet news sites really wanted. The methodology involved conducting a focus group and think-aloud sessions. In both cases, the researchers observed participants using Internet news sites and asked them questions regarding what content they visited and why. The methodology relied heavily on qualitative analysis with a small quantitative component.

Usability studies can be done in conjunction with traffic analysis and search analytics as the purposes are often similar: Improve the user experience and try to get users to complete certain tasks.

Interaction and collaboration analysis.

Interaction and collaboration analysis focuses on how people work together in an online environment. Collaboration analysis looks more at how people work together to create something, such as contributing to a wiki or organizing an event like an unconference, where everyone is working towards a common goal. Interaction analysis tends to focus on how people engage each other when the group has no common goal.

Shedletsky (2000) explains the methodology for interaction analysis, encouraging researchers to look at the topics discussed, the purposes of individuals’ utterances, the structure of conversation, and how properties of talk affect outcomes. The researcher should determine the setting in which this type of analysis will be conducted: in a controlled setting such as a laboratory, by selecting samples of existing conversations, or by examining all conversation that the researcher is capable of overhearing. The researcher also needs to determine whether they will use prompted or unprompted interaction, how they will record conversations, and whether their analysis will be quantitative or qualitative in nature. Once these things have been determined, a methodology for data collection can be worked out.

Viégas et al. (2007) did a collaboration analysis focusing on Wikipedia. The purpose of their work was to examine historical editing patterns and how editing practices have evolved over time, building on work done by Viégas, Wattenberg and Dave in 2003. The methodology involved getting the editing history of articles across several different Wikipedia namespaces. The history of the articles was then examined using several visualization tools, metrics and methods, depending on the established cultural practices for that namespace. One tool they used was a history flow visualization application. A method they used was the manual classification of “all user posts in a purposeful sample”. (Viégas et al., 2007) Metrics they used included counts of horizontal rules, signed user names, new indentation levels, votes in polls and total “references to internal Wikipedia resources.” (Viégas et al., 2007) These tools, metrics and methods allowed them to examine how collaboration and interaction had changed over time.
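
The kinds of counts Viégas et al. report can be approximated with simple pattern matching over talk-page markup. The regular expressions below are rough simplifications of Wikipedia’s actual conventions, shown only to make the metrics concrete:

```python
import re

def talk_page_metrics(wikitext):
    """Count simple coordination markers in wiki talk-page markup (simplified)."""
    return {
        # horizontal rules ("----" on its own line) separate discussion threads
        "horizontal_rules": len(re.findall(r"^----", wikitext, re.MULTILINE)),
        # signed user names follow the [[User:Name]] link convention
        "signatures": len(re.findall(r"\[\[User:[^\]|]+", wikitext)),
        # each line starting with colons marks an indented reply
        "indented_replies": len(re.findall(r"^:+", wikitext, re.MULTILINE)),
    }

sample = "I disagree. [[User:Alice]]\n: Why? [[User:Bob]]\n----\nNew topic"
print(talk_page_metrics(sample))
```

Counts like these, tracked over an article’s full revision history, are what let the authors chart how coordination practices changed over time.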

This type of analysis often stands alone. It could be used as part of a usability study or relationship analysis to provide context for the results of those analysis types.

Relationship analysis.

Relationship analysis involves examining the relationships between users on a social network, message board or mailing list. The goal is to identify cliques of different sizes, or people who are particularly influential in a particular online group. This type of analysis is important to many brands, including Starbucks (Plimsoll, 2010). The purpose of relationship analysis is to identify key influencers who are, or have the potential to be, brand evangelists. (Plimsoll, 2010)

Lord and Singh (2010) define social media influence marketing as being “about recognizing, accounting and tapping into the fact that as your potential consumer makes a purchasing decision, he or she is being influenced by different circles of people through conversations with them, both online and off.”

The methodology for influence identification is not clearly spelled out, as identifying influencers can depend heavily on the network being examined and how the community on a specific site functions. As a result, social media marketers suggest an array of tools, like Twitalyzer, that can be used to help determine your own influence. (Ankeny 2009) Twitalyzer’s Peterson and Katz (2010) explain their site-specific method of determining influence as including the following variables: engagement level, total followers, total following, hashtags cited, lists included on, frequency of updates, references by others, references of others, times content is retweeted, URLs cited and a number of other variables. Sterne (2010) suggests using WeFollow.com to find people who use topic-specific #hashtags on Twitter; the people who tweet the most about a topic are likely to be influencers, in that others looking for tweets around that topic are likely to read them. In a wider web context, Sterne (2010) suggests using Technorati to identify bloggers who have clout and influence around a certain topic.
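
As a toy illustration of how such variables might be combined, the sketch below blends a reach ratio with an engagement rate into a single score. The variables, weights and formula are invented for illustration; this is not Twitalyzer’s actual method:

```python
def influence_score(followers, following, retweets, mentions, updates):
    """Hypothetical weighted influence score; not Twitalyzer's actual formula."""
    if updates == 0:
        return 0.0
    # reach: share of audience relative to reciprocal following
    reach = followers / (followers + following + 1)
    # engagement: reactions (retweets + mentions) per update, capped at 1
    engagement = min((retweets + mentions) / updates, 1.0)
    return round(0.5 * reach + 0.5 * engagement, 3)

print(influence_score(followers=5000, following=200, retweets=300,
                      mentions=150, updates=600))  # 0.856
```

The point of the sketch is the shape of the method, not the numbers: any site-specific influence metric is some weighted combination of observable activity variables like these.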

This type of research can be viewed as a fundamental component to sentiment analysis; social media marketing companies like Razorfish often package the two together. (Lord & Singh, 2010)

Population/demographic studies.

Population studies involve defining the demographic characteristics of a community. In a population study, the goal is also to define the limits and size of the community that is being studied. Because of the complexity in defining the boundaries of a population and in sampling the whole of it, this type of research is rarely done in terms of social media.

Daugherty and Kammeyer (1995) define a population study as the assembling “of numerical data on the sizes of populations.” This sort of data is defined by the authors as “descriptive demographic statistics.” Daugherty and Kammeyer (1995) say “population numbers are always changing, so even if they are accurate when gathered they are soon out of date and inaccurate.” Daugherty and Kammeyer (1995) say the basic purpose of conducting a demographic study “is to explain or predict changes or variations in the population variables or characteristics.” Given the definition of a population study, the methodology involves counting all members of a select population.

The most famous example of a population study is the census. In the United States, this is done every ten years. According to the U.S. Census Bureau (n.d.), the goal of the 2010 US census is “to count all U.S. residents—citizens and non-citizens alike.” This is done by sending all residents a ten-question questionnaire, requiring by law that people complete it, and having a census taker follow up with all households that did not return completed questionnaires. (U.S. Census Bureau, n.d.) The results are then calculated and used by the government to make decisions.

This type of research often stands on its own. The results are often used for marketing purposes and in conducting other research, such as psychographics, to ensure that sampling contains representative populations.

Online target analysis of behavior and psychographics.

Online targeting of, and marketing towards, a specific audience because of its demographic characteristics is extremely common on the Internet. Psychographics covers similar targeting of a specific demographic group, but also includes an offline component.

Sutherland and Canwell (2004) define psychographics as a “market research and market segmentation technique used to measure lifestyles and to develop lifestyle classifications.” (p. 247) Nicolas (2009) defines online behavioral analysis as a series of steps: collecting user data across several sites, organizing information about users based on the sites they visit and their behavior on those sites, “infer[ring] demographics and interest data”, and classifying new users based on the collected data in order to deliver relevant ads and content based on their demographic profiles. Kinney, McDaniel, and DeGaris (2008) define psychographics as attitudes towards something, such as a brand, or involvement with an organization.
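
The classification step Nicolas describes can be sketched as a simple lookup: infer an interest segment from the sites a user has visited. The site-to-interest mapping below is entirely hypothetical:

```python
# Hypothetical site->interest mapping, sketching Nicolas's steps: collect the
# sites a user visits, infer interests, and classify the user for targeting.
SITE_INTERESTS = {
    "afl.com.au": "sport",
    "espn.com": "sport",
    "pitchfork.com": "music",
}

def classify_user(visited_sites):
    """Assign the interest segment most represented in a user's browsing."""
    counts = {}
    for site in visited_sites:
        interest = SITE_INTERESTS.get(site)
        if interest:
            counts[interest] = counts.get(interest, 0) + 1
    return max(counts, key=counts.get) if counts else "unknown"

print(classify_user(["afl.com.au", "espn.com", "pitchfork.com"]))  # sport
```

Real behavioral-targeting systems build these mappings from aggregated data across many sites rather than a hand-written table, but the classification logic follows the same shape.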

Given the methodology involved, much of this type of research is action research, in that it is done in a specific context, based on internal models, to address specific situations.

An example of this type of research comes from Kinney, McDaniel, and DeGaris (2008), who investigated the demographic characteristics of NASCAR fans and their attitudes towards NASCAR, its sponsors and sponsor involvement with NASCAR. The research found that age, gender and education were all important variables in determining sponsor recall: younger, more educated males had the best brand recall amongst NASCAR fans.

This type of research can be viewed as a subcomponent of a population study in that demographic information is sought about the population. In an online context, it often works in conjunction with search and traffic analytics analysis, content analysis, and interaction and collaboration analysis.

Predictive analysis.

A search on 13 July 2010 on SPORTDiscus had three results for “predictive analysis.” A search on the same date on Scopus had 605 results, 275 of which were in engineering, 132 in computer science and 102 in medicine. Predictive analysis is probably one of the least used analysis methods, especially in social media and fandom.

What is predictive analysis? At its simplest, it involves identifying a future event or events, monitoring selected actions that precede the event, and seeing whether those actions can be used to predict the outcome of similar events in the future. If predictive value is found, an organization can monitor behaviors to help make more informed decisions.

An example of this type of research is “Predicting the Future With Social Media” by Asur and Huberman (2010). Their goal was to determine if tweet volume and sentiment on Twitter prior to a movie being released could be used to predict how well the movie performs at the box office. Their methodology involved identifying movies with wide release dates that took place on a Friday, creating a list of keyword searches related to those movies, and using the Twitter API to collect and aggregate all tweets that mentioned those keywords over a three-month period. The authors then compared the tweet volume to box office performance. They concluded that social media “can be used to build a powerful model for predicting movie box-office revenue.” (Asur & Huberman, 2010)
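
The comparison step can be illustrated with an ordinary least-squares fit of revenue on tweet rate, which is in the spirit of the regression models Asur and Huberman built. The numbers below are invented for illustration, not from the study:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Made-up data: average tweets per hour before release vs. opening revenue ($M)
tweet_rate = [10, 40, 80, 120]
revenue = [5, 21, 39, 61]

a, b = fit_line(tweet_rate, revenue)
print(round(a, 2), round(b, 2))   # slope and intercept of the fitted line
predicted = a * 100 + b           # predicted revenue at 100 tweets/hour
```

Once fitted on past releases, the line becomes the predictive model: feed in the observed pre-release tweet rate for a new movie and read off an expected opening revenue.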

This type of research can be used in conjunction with other methods. For example, it can be used alongside a population study to see if certain actions will result in demographic changes.

Rationale for Population Study

The literature review provides insight into the lack of general quantitative analysis regarding the demographic and geographic characteristics of Australian sport fans in general and AFL fans in particular. Much of what is written involves observations based on match attendance, attendance statistics, common historical tropes based on the experience of the authors as members of the sport community, or analysis based on demographic data for the community in which a club was based. The methodology is rarely spelled out. There is little reason to doubt the demographic composition of fans, because most accounts match very well and there are a variety of citations referring to a wide variety of sources. Still, the literature review demonstrates a lack of quantitative research into population characteristics.

The research regarding fan demographics in the Australian online sport community is even sparser. What research has been done tends to focus on fan production, such as the transition from fanzines to online mailing lists, and it is often not quantitative in nature.

Given this hole in the research, there is a clear need to fill it in order to better understand the existing population of AFL fans, who are increasingly using the Internet to facilitate their love of their chosen club. To do this, an appropriate methodology needs to be chosen. The previous section examined the major methodological approaches available for conducting research into social media and online populations. Most of these methods involve some form of interaction analysis or textual analysis; they do not offer a clear method of understanding the characteristics of a large group and its subcomponents.

Methodological Approach

The methodology used in this study will be a population study. To provide context for the findings, other methods will be utilized. The exact method for conducting the population study will differ depending on the site being examined. Therefore, most of the methodology used in this study will appear inside specific chapters.

Throughout this study, there is a dependence on user-listed locations to determine the geographic location of members of the Australian sport fan community. To provide consistency across all sites examined, a list was developed that included the user-generated location, city, state and country. The city, state and country were determined based on intelligent guesswork. For example, as the focus of the research is Australia, if “Melbourne” stood alone, the assumption was that the user meant Melbourne, Victoria, Australia and not Melbourne, Florida, United States. Spelling variations and nicknames were also used to determine location; for example, Brisvegas is a nickname for Brisbane, Queensland, Australia. Locations were often matched against standard ordering patterns such as city, state, country; country, state, city; city, country; or city, state. To process location lists more quickly when using an automated tool such as the one for Twitter, the user-generated list was supplemented with one created by the author, which included all Australian cities listed using the patterns of postal code and city; city, state; city, country; and city, state, country. The completed list contains over 65,000 variants that were either listed by the author or user-created.
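
The matching heuristic described above can be sketched as a lookup against the variant list, falling back to the first comma-separated token under the assumed city-first ordering. The table below is a tiny illustrative subset standing in for the 65,000-variant list:

```python
# Illustrative subset of the location variant list described above.
LOOKUP = {
    "melbourne": ("Melbourne", "Victoria", "Australia"),
    "brisvegas": ("Brisbane", "Queensland", "Australia"),
    "sydney, nsw": ("Sydney", "New South Wales", "Australia"),
}

def resolve_location(raw):
    """Resolve a user-entered location string to (city, state, country)."""
    key = raw.strip().lower()
    if key in LOOKUP:
        return LOOKUP[key]
    # fall back to matching just the first comma-separated token,
    # assuming the common "city, state, country" ordering
    first = key.split(",")[0].strip()
    return LOOKUP.get(first)          # None when no variant matches

print(resolve_location("Brisvegas"))            # nickname resolves to Brisbane
print(resolve_location("Melbourne, VIC, AU"))   # first token matches "melbourne"
print(resolve_location("Paris"))                # None: outside the list
```

Ambiguous standalone names (“Melbourne”) resolve to their Australian reading by construction of the table, which is exactly the intelligent-guesswork assumption the study makes.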

References

Ankeny, J. (2009). HOW TWITTER IS REVOLUTIONIZING BUSINESS. Entrepreneur, 37(12), 26-32. Retrieved from Business Source Premier database.

Asur, S., & Huberman, B. A. (2010). Predicting the Future With Social Media. Social Computing Lab. Retrieved from http://www.hpl.hp.com/research/scl/papers/socialmedia/socialmedia.pdf

Bronwyn, B., Dawson, P., Devine, K., Hannum, C., Hill, S., Leydens, J., Matuskevich, D., Traver, C., & Palmquist, M. (2005). Case Studies. Writing@CSU. Colorado State University Department of English. Retrieved August 26, 2010, from http://writing.colostate.edu/guides/research/casestudy/

Daugherty, H. G., & Kammeyer, K. C. W. (1995). An introduction to population. New York: Guilford Press.

Fang, W. (2007). Using Google Analytics for improving library website content and design: A case study. Library Philosophy and Practice 2007 (June), LPP Special Issue on Libraries and Google. Retrieved September 2, 2010, from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.5924&rep=rep1&type=pdf

Jansen, B. J. (2009). Understanding user-Web interactions via Web analytics. San Rafael, Calif.: Morgan & Claypool Publishers.

Jerz, D. (2002, November 11). Usability Testing: What is it? Dennis G. Jerz; Seton Hill University. Retrieved October 4, 2010, from http://jerz.setonhill.edu/design/usability/intro.htm

Kaushik, A. (2010). Web analytics 2.0: The art of online accountability & science of customer centricity. Hoboken, N.J: Wiley.

Kian, E., Mondello, M., & Vincent, J. (2009). ESPN—The Women’s Sports Network? A Content Analysis of Internet Coverage of March Madness. Journal of Broadcasting & Electronic Media, 53(3), 477-495. doi:10.1080/08838150903102519.

Kinney, L., McDaniel, S., & DeGaris, L. (2008). Demographic and psychographic variables predicting NASCAR sponsor brand recall. International Journal of Sports Marketing & Sponsorship, 9(3), 169-179. Retrieved from SPORTDiscus with Full Text database.

Krippendorff, K. (2007). Content analysis: An introduction to its methodology. Thousand Oaks, Calif.: Sage.

Lord, B., & Singh, S. (2010). Fluent: The Razorfish Social Influence Marketing Report. Razorfish. Retrieved August 25, 2010, from http://fluent.razorfish.com/publication/?m=6540&l=1

Nicolas, P. (2009, December 17). “Online audience behavior analysis and targeting.” Patrick Nicolas Official Home Page. Retrieved August 1, 2010, from http://www.pnexpert.com/Analytics.html

Peterson, E., & Katz, J. (2010). Twitalyzer Help and Company Information | Twitalyzer: Serious Analytics for Social Media and Social CRM. Twitalyzer. Retrieved August 25, 2010, from http://www.twitalyzer.com/help.asp

Plimsoll, S. (2010). Find and target customers in the social media maze. Marketing (00253650), 10-11. Retrieved from Business Source Premier database.

Ramos, A., & Cota, S. (2009). Search engine marketing. New York: McGraw-Hill.

Shedletsky, L. (2000, September 7). Interaction Analysis. University of Southern Maine Communications Department. Retrieved October 5, 2010, from http://www.usm.maine.edu/com/chapter8/

Sterne, J. (2010). Social media metrics: How to measure and optimize your marketing investment. Hoboken, N.J: John Wiley.

Sturgil, A., Pierce, R., & Wang, Y. (2010). Online News Websites: How Much Content Do Young Adults Want?. Journal of Magazine & New Media Research, 11(2), 1-18. Retrieved from Communication & Mass Media Complete database.

Sutherland, J., & Canwell, D. (2004). Key Concepts in Marketing. Palgrave Key Concepts. Hampshire, England: Palgrave MacMillan.

Sweeney, S., Dorey, E., & MacLellan, A. (2006). 3G marketing on the internet: Third generation internet marketing strategies for online success. Gulf Breeze, FL: Maximum Press.

Viégas, F. B., Wattenberg, M., Kriss, J., & van Ham, F. (2007). Talk Before You Type: Coordination in Wikipedia. Proceedings of the 40th Hawaii International Conference on System Sciences. Big Island, Hawaii. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.6907&rep=rep1&type=pdf

U.S. Census Bureau. (n.d.). How We Count America – 2010 Census. U.S. Census Bureau. Retrieved August 28, 2010, from http://2010.census.gov/2010census/how/how-we-count.php

Vincenzini, A. (2010, July 14). A Case Study: The NBA’s Social Media Strategy & Tactics. Retrieved August 26, 2010, from http://www.slideshare.net/AdamVincenzini/the-nba-and-social-media-a-case-study

Wanner, F., Rohrdantz, C., Mansmann, F., Oelke, D. and Keim, D., Visual Sentiment Analysis of RSS News Feeds Featuring the US Presidential Election in 2008. in Workshop on Visual Interfaces to the Social and Semantic Web, (2009).

Weber, L. (2009). Sticks and Stones: How Digital Business Reputations Are Created over Time and Lost in a Click. Hoboken, NJ: John Wiley & Sons.

Twitter: A Solution to the Follow Spammers

Posted by Laura on Tuesday, 5 October, 2010

I’m having another period of annoyance with Twitter. I really should probably turn off follower alerts again, because right now I’m putting people on a spammer list if they already follow 2,000 or more accounts. I’m also sending out cranky DMs blasting people for this sort of following.

For the past two months, I’ve spent a lot of time looking at Twitter. I’ve looked at follower counts, follower geographic patterns, people’s descriptions, and people’s locations, often to map the geographic spread of Australian sport fandom. I’ve also read a fair bit about Twitter on technology blogs to further my own understanding for the intended mini-literature review in the Twitter chapter of my dissertation. I’ve basically been ODing on Twitter. There is a lot of interesting stuff out there.

But as a user? I’m getting pretty cranky. Seriously cranky. Every day, it feels like I get 2 to 10 follows (across about 3 different accounts) from people I don’t know, who are not geographically close to where I’m writing about, who don’t appear interested in professional sports, who have low interaction rates, and who follow 2,000+ accounts. As part of my research, I constantly ask: What is the ROI for a team on Twitter in terms of where its audience is located? How can it best leverage its network? What can it provide for fans to induce them to follow? How can fans help the team? As a user, I can’t see how the people I describe gain any benefit from following me. (They can’t actually read me. I can barely keep up with 350, and I only manage because Americans get neglected as they post while I sleep.) (In one case, I was followed and unfollowed about 5 times by the same user with 4,000 followers.) As a user, I can’t see what they offer me. They rarely bother to explain.

And this is killing my desire to stay on Twitter. Seriously killing it. But I can’t just leave. There are people I want and need to keep track of on Twitter for professional reasons. (The personal ones are almost exclusively on Facebook these days, so on that level I don’t feel the need to stay.) And if you’re not active on Twitter while covering social media, people sometimes doubt your legitimacy because you’re not using the product you’re discussing.

What I’d really like is for Twitter to make the following reforms:

1. Add a field for follow philosophy. It could be a selection from a list or freeform text. That way, when people follow others, they can see whether they share a mutual philosophy: “I follow back everyone.” “I follow friends, family and professional acquaintances.” “I follow celebrities.” “I follow only people with fewer than 1,000 followers.”
2. Allow people to block follows from accounts above a chosen following count unless they follow those accounts first. (I want to block anyone who already follows 1,500+ accounts from following me first. If you want to follow me, interact with me first. Otherwise, add me to a list.) This would cut back on spam following by power users.
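As a rough sketch, the second reform amounts to a simple filter on incoming follow requests. This is purely illustrative: the function and field names are invented, the threshold is just my own preference, and none of this reflects Twitter’s actual API or data model.

```python
# Hypothetical sketch of reform #2: reject follows from accounts that
# already follow more than a chosen number of people, unless the target
# user already follows them back (i.e., mutual interest exists).
# All names here are invented for illustration.

FOLLOW_THRESHOLD = 1500  # my preferred cutoff from the post


def should_allow_follow(follower_following_count: int,
                        target_follows_follower: bool) -> bool:
    """Return True if the follow request should go through."""
    if target_follows_follower:
        # The target already follows this account, so mutual
        # interest is established; always allow.
        return True
    # Otherwise, block likely spam-followers above the threshold.
    return follower_following_count <= FOLLOW_THRESHOLD


# A power user following 4,000 accounts I don't follow gets blocked:
print(should_allow_follow(4000, False))  # False
# A smaller account can still follow me:
print(should_allow_follow(350, False))   # True
# Anyone I already follow can follow me regardless of their count:
print(should_allow_follow(4000, True))   # True
```

The key design point is that the threshold only applies to strangers; anyone the user has already chosen to follow is exempt, which preserves genuine mutual relationships.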

These two reforms would help kill off Twitter spam following (and yes, your unwanted e-mail notification that you followed me, never intending to read me, is spam: it is unsolicited and you indicated no mutual interest) and help prevent my own fatigue. I use and prefer Facebook over Twitter precisely because I’m not inundated with unwanted announcements like that.
