NSF Org: SES Division of Social and Economic Sciences
Recipient:
Initial Amendment Date: May 28, 2019
Latest Amendment Date: July 10, 2019
Award Number: 1915790
Award Instrument: Standard Grant
Program Manager: Sara Kiesler, skiesler@nsf.gov, (703) 292-8643, SES Division of Social and Economic Sciences, SBE Directorate for Social, Behavioral & Economic Sciences
Start Date: June 1, 2019
End Date: May 31, 2023 (Estimated)
Total Intended Award Amount: $299,946.00
Total Awarded Amount to Date: $315,946.00
Funds Obligated to Date:
History of Investigator:
Recipient Sponsored Research Office: 3 Rutgers Plz, New Brunswick, NJ 08901-8559, US, (848) 932-0150
Sponsor Congressional District:
Primary Place of Performance: 4 Huntington Street, New Brunswick, NJ 08901-1071, US
Primary Place of Performance Congressional District:
Unique Entity Identifier (UEI):
Parent UEI:
NSF Program(s): Secure & Trustworthy Cyberspace
Primary Program Source:
Program Reference Code(s):
Program Element Code(s):
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.075
ABSTRACT
The prevalence of poor-quality information in cyberspace poses a threat to civil society. To increase information quality, multiple automated algorithms for assessing the quality of online information have been proposed. However, the fairness and performance of these algorithms across political and policy opinions have been challenged, undermining trust in such systems. Through a unique early-stage interdisciplinary collaboration that brings together experts from the fields of information science, computer science, communication, political science, and journalism, this project will develop accurate and fair information quality assessment algorithms, while also gleaning deeper insight into the nature of information used across the ideological spectrum. The proposed research advances the science of information and will offer insights to organizations that aim to undertake automated information quality assessment, ultimately allowing for the creation of a safer and more trustworthy cyberspace.
The proposed project will include: (1) the creation of a large article dataset robustly labeled for both quality and political ideological alignment, (2) an audit of multiple existing information quality assessment algorithms to assess their accuracy and fairness, (3) a systematic post-hoc inductive analysis of the content mislabeled by these algorithms, and (4) modification of existing algorithms to support fairer and more accurate information quality assessment. These four phases will build on each other, leveraging the contributions of each discipline, and will provide a new interdisciplinary model for SaTC-related research. This project will provide interdisciplinary training to graduate students, mentoring them in diverse methods and laying the groundwork for long-term interdisciplinary research. It will also help broaden participation in data science professions.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Note:
When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
This project aimed to create accurate and fair misinformation detection algorithms that can identify false or misleading information in online political news articles. Misinformation detection is a challenging task that requires not only technical expertise but also ethical, social, and journalistic judgment. Hence, this project built an interdisciplinary framework involving experts from information science, computer science, journalism, and communication to develop novel methods for creating fair and accurate misinformation detection algorithms.
The project had the following main objectives:
- To study existing misinformation detection approaches for the existence of bias based on political leaning or other factors.
- To understand the mechanics and rationale behind the results found, and their potential implications for public trust and democratic discourse.
- To identify ways to create fairer algorithms that can account for different perspectives and values.
The project carried out the following major activities:
- Experimented with multiple algorithms to test the accuracy and fairness of machine learning approaches to misinformation detection.
- Developed reliable coding procedures for manually labeling news articles for veracity and political leaning, including training graduate students in those procedures. One thousand (1,000) news articles were labeled following journalistic practices and shared as a public resource.
- Compared and contrasted the article labels for veracity and political leaning produced by the journalistic labeling process versus source-based attribution.
- Developed new methods for countering bias in misinformation detection algorithms using the journalistically labeled dataset described above.
- Extended and validated these bias-reduction ideas in multiple domains, e.g., misinformation detection, news sentiment detection, and cyberbullying detection.
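The report does not include code, but the auditing activity above can be illustrated with a minimal sketch: compute a classifier's accuracy separately for articles from each political leaning and report the gap between groups. All data, labels, and the `audit` function here are hypothetical stand-ins, not the project's actual models or dataset.

```python
# Illustrative fairness audit: per-group accuracy of a misinformation
# classifier, split by political leaning. Toy data, not project data.

def audit(labels, predictions, groups):
    """Return accuracy per group and the max-min accuracy gap."""
    stats = {}
    for y, p, g in zip(labels, predictions, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (y == p), total + 1)
    accuracy = {g: c / t for g, (c, t) in stats.items()}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Toy example: 1 = misinformation, 0 = reliable.
labels      = [1, 0, 1, 0, 1, 0, 1, 0]
predictions = [1, 0, 0, 0, 1, 1, 0, 0]
groups      = ["left", "left", "left", "left",
               "right", "right", "right", "right"]

per_group, gap = audit(labels, predictions, groups)
print(per_group)  # accuracy for each political leaning
print(gap)        # a gap of 0.0 would mean equal accuracy across groups
```

A nonzero gap signals that the classifier errs more often on one side of the ideological spectrum, which is the kind of disparity the project's audits were designed to surface.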
The project achieved the following outcomes:
- The project contributed to the intellectual merit of the field by advancing the state of the art in misinformation detection and algorithmic fairness. It produced novel insights into the sources and effects of bias in misinformation detection and proposed methods to mitigate it. It also developed a new dataset of news articles labeled for veracity and political leaning that can serve as a valuable resource for future research in computational journalism and fairness in machine learning.
- The project also had broader impacts on society by raising awareness of the importance and challenges of misinformation detection and algorithmic fairness. It engaged stakeholders such as journalists, researchers, and the general public through publications, presentations, workshops, and media outreach, trained graduate students in interdisciplinary skills, and fostered collaboration among researchers from different disciplines and institutions.
The project demonstrated that misinformation detection and algorithmic fairness are complex and interrelated issues that require careful consideration and collaboration. The project also showed that it is possible to create more accurate and fair algorithms that can help combat misinformation and promote informed citizenship.
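One simple way to make such algorithms fairer, offered here purely as an illustrative sketch (the function name and data are hypothetical, and this is not the project's actual method), is to reweight training examples so that every (political leaning, veracity label) cell contributes equally, preventing a model from exploiting a skew between leaning and label in the training set.

```python
# Hypothetical bias-countering sketch: balance training weight across
# (group, label) cells so leaning cannot proxy for veracity.

from collections import Counter

def balancing_weights(labels, groups):
    """Weight each example inversely to the size of its (group, label) cell."""
    cells = Counter(zip(groups, labels))
    n, k = len(labels), len(cells)
    # After weighting, each cell carries the same total weight, n / k.
    return [n / (k * cells[(g, y)]) for g, y in zip(groups, labels)]

# Toy skewed data: "left" articles are mostly labeled 1, "right" mostly 0.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["left", "left", "left", "left",
          "right", "right", "right", "right"]
weights = balancing_weights(labels, groups)
```

The resulting weights can be passed to any learner that accepts per-sample weights; rare cells (e.g., reliable "left" articles in this toy data) are up-weighted and common cells down-weighted.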
The project disseminated its findings through various publications, presentations, workshops, and media outreach. Some of the notable ones are:
- One article on fairness in misinformation detection published in the proceedings of the ACM Web Science Conference, 2023.
- One article on fairness in misinformation detection published in the proceedings of the ICWSM MEDIATE workshop, 2022.
- One journal article on misinformation detection algorithms published in the Journal of the Association for Information Science and Technology, 2020.
- Six additional conference proceedings articles and one journal article resulting from related work on fairness auditing and bias reduction in related information-processing domains.
- Results from the work shared in multiple invited talks and panels at NSF-, NIST-, and ACM-sponsored events.
The articles have been shared in the NSF Public Access Repository, and the codebook and labeled dataset on misinformation and political leaning are available at https://osf.io/qwnsf/.
Overall, this project has created new approaches for building fair and accurate misinformation detection algorithms that can support the well-being of individuals in society and strengthen national security. This research continues to contribute practical strategies for detecting and countering misinformation online.
Last Modified: 08/21/2023
Modified by: Vivek K Singh
Please report errors in award information by writing to: awardsearch@nsf.gov.