About Inflection Point International
For this study, 23 researchers conducted interviews with the leaders of 201 media organizations in 12 countries.
100 interviews were carried out in Latin America:
- 25 in Argentina
- 25 in Brazil
- 25 in Colombia
- 25 in Mexico
52 of 60 planned interviews were carried out in Asia:
- 14 in the Philippines
- 8 in Malaysia
- 15 in Indonesia
- 15 in Thailand
49 of 60 planned interviews were carried out in Africa:
- 14 in Ghana
- 11 in Kenya
- 14 in Nigeria
- 10 in South Africa
Our original research plan included Myanmar, but the military coup there in early 2021 led us to replace this country with Thailand.
How digital natives were selected
Our regional managers and researchers worked together to draw up initial media lists for each country based on the same selection criteria that SembraMedia uses for its media directory. The proposed media lists were then reviewed by our partner funders—Luminate and CIMA—as well as regional allies, including Splice Media in Southeast Asia, SAMIP in Africa, and SembraMedia’s team of country ambassadors in Latin America.
How data was processed and analyzed
A team of three analysts processed the data and developed the findings and insights included in this report. Their biographies appear on the Who we are page, along with the rest of the team that worked on this project.
The analysts spent several weeks exploring, normalizing, and anonymizing the data, as well as translating it into English and checking currency conversion rates. They also defined missing metrics and went back to researchers with questions when the data was incomplete or warranted further exploration.
Data was processed using Python, and notebooks were uploaded to GitHub for team collaboration. Once anonymized and prepared, the data was uploaded to Google Sheets for easier calculations, pivot tables, and general comparisons. More complex analysis was done in Python and uploaded to the team’s private repository.
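The preparation steps described above (anonymizing organizations and normalizing revenue into US dollars) can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the column names, organization names, and conversion rates are all assumptions.

```python
# Illustrative sketch of anonymization and currency normalization.
# Schema, names, and rates are hypothetical, not the study's real data.
import pandas as pd

raw = pd.DataFrame({
    "organization": ["Medio A", "Medio B", "Medio C"],      # hypothetical names
    "country": ["Argentina", "Brazil", "Ghana"],
    "annual_revenue_local": [9_500_000, 250_000, 120_000],  # in local currency
    "currency": ["ARS", "BRL", "GHS"],
})

# Assumed conversion rates to USD (placeholders, not the rates used in the study)
usd_rates = {"ARS": 0.010, "BRL": 0.19, "GHS": 0.17}

df = raw.copy()
# Anonymize: replace organization names with opaque IDs
df["org_id"] = ["org_%03d" % i for i in range(1, len(df) + 1)]
df = df.drop(columns=["organization"])
# Normalize revenue to a common currency for comparison
df["annual_revenue_usd"] = df["annual_revenue_local"] * df["currency"].map(usd_rates)

print(df[["org_id", "country", "annual_revenue_usd"]])
```

Dropping the name column before analysis means downstream notebooks only ever see the opaque IDs.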
The team took a multi-step approach to analysis that included an initial exploratory phase and a hypothesis validation phase. First, we collected questions from the research team and tested them against the available data. Where the data pointed to a significant finding, we followed up with a hypothesis verification test.
For that purpose, we used a statistical inference test called the Mann-Whitney U test (also known as the Wilcoxon-Mann-Whitney test). The Mann-Whitney test is used as an alternative to the t-test when the data are not normally distributed; it assesses whether two independent samples are drawn from the same distribution. The significance level, also denoted as alpha or α, is the probability of rejecting the null hypothesis when it is true. For example, a significance level of 0.05 indicates a 5% risk of concluding that a difference exists when there is no actual difference. We established a 0.05 level of significance, following academic research standards.
Sample sizes varied across comparisons, but we set a lower limit of seven items per group. For example, when comparing the revenue of media that generate a certain type of content versus those that do not, we made sure that each subset contained at least seven media organizations.
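The comparison described above can be sketched in a few lines. The revenue figures here are invented stand-ins (the study's dataset is anonymized and not reproduced here); only the test, the 0.05 significance level, and the seven-per-group floor come from the methodology.

```python
# Sketch of a Mann-Whitney U comparison between two groups of media,
# using hypothetical revenue figures in USD.
import numpy as np
from scipy.stats import mannwhitneyu

MIN_GROUP_SIZE = 7   # lower limit per group used in the study
ALPHA = 0.05         # significance level used in the study

# Hypothetical annual revenue for media that do / do not produce a content type
with_content = np.array([12000, 45000, 30000, 8000, 52000, 21000, 60000, 15000])
without_content = np.array([5000, 9000, 7000, 11000, 4000, 16000, 6000])

# Enforce the minimum group size before testing
assert len(with_content) >= MIN_GROUP_SIZE
assert len(without_content) >= MIN_GROUP_SIZE

stat, p_value = mannwhitneyu(with_content, without_content,
                             alternative="two-sided")
if p_value < ALPHA:
    print(f"Significant difference between groups (p = {p_value:.4f})")
else:
    print(f"No significant difference at alpha = {ALPHA} (p = {p_value:.4f})")
```

Because the Mann-Whitney test ranks the observations rather than assuming normality, it tolerates the skewed revenue distributions typical of small media organizations.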
We also conducted other types of analysis to maximize findings: clustering analysis, marginal contribution analysis (MCA) for certain target variables, and language analysis for open questions. Clustering analysis was performed using four different techniques: K-means, DBSCAN, spectral clustering, and agglomerative clustering. For the MCA, we first analyzed whether there was a relationship between different variables and then, once the relationship was established, analyzed how much each contributed marginally. For language analysis, we built word clouds and looked for recurring patterns.
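The four clustering techniques named above are all available in scikit-learn and can be run side by side, as sketched below. The synthetic data and every parameter value (cluster counts, DBSCAN's eps and min_samples) are illustrative assumptions, not the settings used in the study.

```python
# Sketch: running the four clustering techniques named in the text on
# synthetic data. All parameters here are illustrative assumptions.
import numpy as np
from sklearn.cluster import (KMeans, DBSCAN, SpectralClustering,
                             AgglomerativeClustering)
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic stand-in for per-organization metrics (e.g. revenue, staff, reach):
# two well-separated groups of 20 observations with 3 features each.
X = np.vstack([
    rng.normal(loc=[1.0, 1.0, 1.0], scale=0.2, size=(20, 3)),
    rng.normal(loc=[5.0, 5.0, 5.0], scale=0.2, size=(20, 3)),
])
X = StandardScaler().fit_transform(X)  # put features on a common scale

models = {
    "K-means": KMeans(n_clusters=2, n_init=10, random_state=0),
    "DBSCAN": DBSCAN(eps=0.5, min_samples=5),
    "Spectral": SpectralClustering(n_clusters=2, random_state=0),
    "Agglomerative": AgglomerativeClustering(n_clusters=2),
}

results = {}
for name, model in models.items():
    labels = model.fit_predict(X)
    # DBSCAN labels noise points -1, so exclude that pseudo-label
    results[name] = len(set(labels) - {-1})
    print(f"{name}: {results[name]} clusters found")
```

Comparing the label assignments across methods is a quick robustness check: clusters that only one algorithm produces are usually not worth reporting.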
To provide points of comparison and benchmarks, we also included data from other research projects. For example, we compared some of our findings with open datasets from the World Bank, RSF’s Press Freedom Index, and UNESCO’s observatory of journalists around the world.
The digital native media organizations included in this study
Our primary goal in producing this report is to help digital media leaders better understand their challenges and opportunities, but we could not have done any of this without the participation of the leaders of the media organizations listed here, each of whom was interviewed for this report.