Award Abstract # 1915790
EAGER: SaTC: Early-Stage Interdisciplinary Collaboration: Fair and Accurate Information Quality Assessment Algorithm

NSF Org: SES
Division of Social and Economic Sciences
Recipient: RUTGERS, THE STATE UNIVERSITY
Initial Amendment Date: May 28, 2019
Latest Amendment Date: July 10, 2019
Award Number: 1915790
Award Instrument: Standard Grant
Program Manager: Sara Kiesler
skiesler@nsf.gov
 (703)292-8643
SES
 Division of Social and Economic Sciences
SBE
 Directorate for Social, Behavioral and Economic Sciences
Start Date: June 1, 2019
End Date: May 31, 2023 (Estimated)
Total Intended Award Amount: $299,946.00
Total Awarded Amount to Date: $315,946.00
Funds Obligated to Date: FY 2019 = $315,946.00
History of Investigator:
  • Vivek Singh (Principal Investigator)
    vivek.k.singh@rutgers.edu
  • Lauren Feldman Rogers (Co-Principal Investigator)
Recipient Sponsored Research Office: Rutgers University New Brunswick
3 RUTGERS PLZ
NEW BRUNSWICK
NJ  US  08901-8559
(848)932-0150
Sponsor Congressional District: 12
Primary Place of Performance: Rutgers University
4 Huntington Street
New Brunswick
NJ  US  08901-1071
Primary Place of Performance Congressional District: 06
Unique Entity Identifier (UEI): M1LVPE5GLSD9
Parent UEI:
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source: 01001920DB NSF RESEARCH & RELATED ACTIVIT
Program Reference Code(s): 025Z, 065Z, 114Z, 7434, 7916, 9178, 9251
Program Element Code(s): 806000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.075

ABSTRACT

The prevalence of poor-quality information in cyberspace poses threats to civil society. To improve information quality, multiple automated algorithms for assessing the quality of online information have been proposed. However, the fairness and performance of these algorithms across political and policy opinions have been challenged, undermining trust in such systems. Through a unique early-stage interdisciplinary collaboration that brings together experts from information science, computer science, communication, political science, and journalism, this project will develop accurate and fair information quality assessment algorithms, while also gleaning deeper insight into the nature of information used across the ideological spectrum. The proposed research advances the science of information and will offer insights to organizations that aim to automate information quality assessment, ultimately allowing for the creation of safer and more trustworthy cyberspaces.

The proposed project will include: (1) the creation of a large article dataset that has been robustly labeled for both quality and political ideological alignment, (2) an audit of multiple existing information quality assessment algorithms to assess their accuracy and fairness, (3) a systematic post-hoc inductive analysis of the content mislabeled by these algorithms, and (4) modification of existing algorithms to support fairer and more accurate information quality assessment. These four phases will build upon each other, leveraging the contributions of each discipline, and will provide a new interdisciplinary model for SaTC-related research. This project will provide interdisciplinary training to graduate students, mentoring them in diverse methods and laying the groundwork for long-term interdisciplinary research. This project also will help broaden participation in data science professions.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Note:  When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

Almuzaini, Abdulaziz A. and Bhatt, Chidansh A. and Pennock, David M. and Singh, Vivek K. "ABCinML: Anticipatory Bias Correction in Machine Learning Applications." 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022. https://doi.org/10.1145/3531146.3533211
Singh, Vivek K. and Andre, Elisabeth and Boll, Susanne and Hildebrandt, Mireille and Shamma, David A. "Legal and Ethical Challenges in Multimedia Research." IEEE MultiMedia, v.27, 2020. https://doi.org/10.1109/MMUL.2020.2994823
Roy, Jacob and Bhatt, Chidansh and Chayko, Mary and Singh, Vivek K. "Gendered Sounds in Household Devices: Results from an Online Search Case Study." Proceedings of the Association for Information Science and Technology, v.58, 2021. https://doi.org/10.1002/pra2.576
Park, Jinkyung and Ellezhuthil, Rahul Dev and Isaac, Joseph and Mergerson, Christoph and Feldman, Lauren and Singh, Vivek "Misinformation Detection Algorithms and Fairness across Political Ideologies: The Impact of Article Level Labeling." Proceedings of the 15th ACM Web Science Conference 2023, 2023. https://doi.org/10.1145/3578503.3583617
Alasadi, Jamal and Arunachalam, Ramanathan and Atrey, Pradeep K. and Singh, Vivek K. "A Fairness-Aware Fusion Framework for Multimodal Cyberbullying Detection." 2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM), 2020. https://doi.org/10.1109/BigMM50055.2020.00032
Singh, Vivek K. and Ghosh, Isha and Sonagara, Darshan "Detecting fake news stories via multimodal analysis." Journal of the Association for Information Science and Technology, v.72, 2021. https://doi.org/10.1002/asi.24359
Park, J. and Ellezhuthil, R. and Arunachalam, R. and Feldman, L. and Singh, V. "Toward Fairness in Misinformation Detection Algorithms." Workshop Proceedings of the 16th International AAAI Conference on Web and Social Media, v.16, 2022. https://doi.org/10.36190/2022.54
Almuzaini, Abdulaziz A. and Singh, Vivek K. "Balancing Fairness and Accuracy in Sentiment Detection using Multiple Black Box Models." Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia, 2020. https://doi.org/10.1145/3422841.3423536

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This project aimed to create accurate and fair misinformation detection algorithms that can identify false or misleading information in online political news articles. Misinformation detection is a challenging task that requires not only technical expertise but also ethical, social, and journalistic judgment. Hence, this project developed an interdisciplinary framework involving experts from information science, computer science, journalism, and communication to create novel methods for building fair and accurate misinformation detection algorithms.

The project had the following main objectives:

  • To audit existing misinformation detection approaches for bias based on political leaning or other factors.

  • To understand the mechanics and rationale behind the results found and the potential implications for public trust and democratic discourse.

  • To identify ways to create fairer algorithms that can account for different perspectives and values.

The project carried out the following major activities:

  • Experimented with multiple machine learning algorithms for misinformation detection, testing their accuracy and fairness.

  • Developed reliable coding procedures for manually labeling news articles for veracity and political leaning, including training graduate students in those procedures. One thousand (1,000) news articles were labeled following journalistic practices and shared as a public resource.

  • Compared and contrasted the article labels for veracity and political leaning as identified by the journalistic labeling process versus source-based attribution.

  • Developed new methods for countering bias in misinformation detection algorithms using the above journalistically labeled dataset.

  • Extended and validated the above-mentioned ideas for algorithmic bias reduction in multiple domains, e.g., misinformation detection, news sentiment detection, and cyberbullying detection.
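The accuracy-and-fairness testing described above can be illustrated with a minimal sketch: disaggregate a misinformation classifier's accuracy by the political leaning of each article and report the gap between groups. The function names, toy labels, and group names below are illustrative assumptions, not the project's actual code or data.

```python
# Minimal sketch of a group-fairness audit for a binary veracity
# classifier: compute per-group accuracy and the largest gap between
# groups (0.0 would indicate accuracy parity). All names and data here
# are hypothetical.

from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Per-group accuracy: fraction of correct predictions in each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

def accuracy_gap(y_true, y_pred, groups):
    """Difference between the best- and worst-served groups."""
    acc = accuracy_by_group(y_true, y_pred, groups)
    return max(acc.values()) - min(acc.values())

# Toy example: 1 = misinformation, 0 = reliable.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
groups = ["left"] * 4 + ["right"] * 4

print(accuracy_by_group(y_true, y_pred, groups))  # {'left': 0.75, 'right': 0.5}
print(accuracy_gap(y_true, y_pred, groups))       # 0.25
```

In practice such an audit would use held-out labeled articles (like the journalistically labeled dataset above) and richer metrics than raw accuracy, but the disaggregate-then-compare pattern is the same.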

The project achieved the following outcomes:

  • The project contributed to the intellectual merit of the field by advancing the state of the art in misinformation detection and algorithmic fairness. The project produced novel insights into the sources and effects of bias in misinformation detection and proposed methods to mitigate it. The project also developed a new dataset of news articles labeled for veracity and political leaning that can serve as a valuable resource for future research in computational journalism and fairness in machine learning.

  • The project also had broader impacts on society by raising awareness of the importance and challenges of misinformation detection and algorithmic fairness. The project engaged with various stakeholders such as journalists, researchers, and the general public through publications, presentations, workshops, and media outreach. The project also trained graduate students in interdisciplinary skills and fostered collaboration among researchers from different disciplines and institutions.

The project demonstrated that misinformation detection and algorithmic fairness are complex and interrelated issues that require careful consideration and collaboration. The project also showed that it is possible to create more accurate and fair algorithms that can help combat misinformation and promote informed citizenship.

The project disseminated its findings through various publications, presentations, workshops, and media outreach. Some of the notable ones are:

  • One article on fairness in misinformation detection published in the proceedings of the ACM Web Science Conference, 2023.

  • One article on fairness in misinformation detection published in the proceedings of the ICWSM MEDIATE workshop, 2022.

  • One journal article on misinformation detection algorithms published in the Journal of the Association for Information Science and Technology, 2021.

  • Six additional conference proceedings articles and one journal article resulting from related work on fairness auditing and bias reduction in adjacent information-processing domains.

  • Results from the work shared in multiple invited talks and panels at NSF-, NIST-, and ACM-sponsored events.

The articles have been shared in the NSF Public Access Repository, and the codebook and the labeled dataset on misinformation and political leaning are available at https://osf.io/qwnsf/.

Overall, this project created new approaches for building fair and accurate misinformation detection algorithms that can support the improved well-being of individuals in society and improved national security. This research contributes practical strategies for detecting and countering misinformation online.

 


Last Modified: 08/21/2023
Modified by: Vivek K Singh

Please report errors in award information by writing to: awardsearch@nsf.gov.
