{"id":193432,"date":"2023-09-08T21:39:35","date_gmt":"2023-09-08T21:39:35","guid":{"rendered":"https:\/\/tokenstalk.info\/?p=193432"},"modified":"2023-09-08T21:39:35","modified_gmt":"2023-09-08T21:39:35","slug":"scientists-created-opiniongpt-to-explore-explicit-human-bias-and-you-can-test-it-for-yourself","status":"publish","type":"post","link":"https:\/\/tokenstalk.info\/crypto\/scientists-created-opiniongpt-to-explore-explicit-human-bias-and-you-can-test-it-for-yourself\/","title":{"rendered":"Scientists created \u2018OpinionGPT\u2019 to explore explicit human bias \u2014 and you can test it for yourself"},"content":{"rendered":"
A team of researchers from Humboldt-Universit\u00e4t zu Berlin has developed a large language model (LLM) with the distinction of having been intentionally tuned to generate outputs expressing overt bias.<\/p>\n
Called OpinionGPT, the team\u2019s model is a tuned variant of Meta\u2019s Llama 2, an AI system similar in capability to OpenAI\u2019s ChatGPT or Anthropic\u2019s Claude 2. <\/p>\n
Using a process called instruction-based fine-tuning, OpinionGPT can purportedly respond to prompts as if it were a representative of one of 11 bias groups: American, German, Latin American, Middle Eastern, a teenager, someone over 30, an older person, a man, a woman, a liberal, or a conservative. <\/p>\n
OpinionGPT was refined on a corpus of data derived from \u201cAskX\u201d communities, called subreddits, on Reddit. Examples of these subreddits would include \u201cAsk a Woman\u201d and \u201cAsk an American.\u201d <\/p>\n
The team started by identifying subreddits related to each of the 11 biases and pulling the 25,000 most popular posts from each one. They then retained only those posts that met a minimum threshold for upvotes, did not contain an embedded quote, and were under 80 words.<\/p>\n
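The filtering step described above can be sketched in a few lines. This is a minimal illustration, not the paper's code: the field names (`score`, `body`), the upvote threshold, and the use of `>` as a quote marker are all assumptions.

```python
# Hypothetical sketch of the post-filtering criteria described above.
# MIN_UPVOTES is an assumed value; the paper's actual threshold may differ.
MIN_UPVOTES = 100
MAX_WORDS = 80

def keep_post(post: dict) -> bool:
    """Return True if a post passes all three described filters."""
    body = post.get("body", "")
    return (
        post.get("score", 0) >= MIN_UPVOTES  # minimum upvote threshold
        and ">" not in body                  # crude check for embedded Reddit quote markers
        and len(body.split()) < MAX_WORDS    # under 80 words
    )

posts = [
    {"body": "Short opinion with enough votes", "score": 250},
    {"body": "> quoted text in a reply", "score": 500},
    {"body": "low score post", "score": 3},
]
filtered = [p for p in posts if keep_post(p)]  # only the first post survives
```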
With what was left, it appears as though they used an approach similar to Anthropic\u2019s Constitutional AI. Rather than spin up entirely new models to represent each bias label, they essentially fine-tuned the single 7-billion-parameter Llama 2 model with separate instruction sets for each expected bias.<\/p>\n
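Instruction-based fine-tuning of this kind typically pairs each training answer with a bias-specific instruction prefix. The sketch below illustrates the general idea only; the template wording, function name, and example data are hypothetical, not taken from the paper.

```python
# Hypothetical illustration of instruction-based fine-tuning data:
# one shared model, with the bias group encoded in the instruction text.
BIAS_GROUPS = [
    "an American", "a German", "a Latin American", "a Middle Easterner",
    "a teenager", "someone over 30", "an older person",
    "a man", "a woman", "a liberal", "a conservative",
]

def make_training_example(bias: str, question: str, reddit_answer: str) -> dict:
    """Pair a bias-labeled instruction with a Reddit-derived response."""
    return {
        "instruction": f"Answer as if you were {bias}: {question}",
        "response": reddit_answer,
    }

ex = make_training_example(
    "a German", "What is your favorite food?", "Currywurst, obviously."
)
```

Because the bias label lives in the instruction rather than in separate model weights, one fine-tuning run can cover all 11 groups.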
Related:\u00a0AI usage on social media has potential to impact voter sentiment<\/em><\/strong><\/p>\n
The result, based on the methodology, architecture, and data described in the German team\u2019s research paper, appears to be an AI system that functions more as a stereotype generator than as a tool for studying real-world bias.<\/p>\n
Due to the nature of the data the model was refined on, and that data\u2019s dubious relation to the labels defining it, OpinionGPT doesn\u2019t necessarily output text that aligns with any measurable real-world bias. It simply outputs text reflecting the bias of its data.<\/p>\n
The researchers themselves recognize some of the limitations this places on their study, writing:<\/p>\n
\u201cFor instance, the responses by "Americans" should be better understood as ‘Americans that post on Reddit,’ or even ‘Americans that post on this particular subreddit.’ Similarly, ‘Germans’ should be understood as ‘Germans that post on this particular subreddit,’ etc.\u201d<\/p><\/blockquote>\n
These caveats could be refined further to say the posts come from, for example, \u201cpeople claiming to be Americans who post on this particular subreddit,\u201d as the paper makes no mention of vetting whether the posters behind a given post are in fact representative of the demographic or bias group they claim to be.<\/p>\n
The authors go on to state that they intend to explore models that further delineate demographics (i.e., liberal German, conservative German).<\/p>\n
The outputs given by OpinionGPT appear to vary between reflecting demonstrable bias and differing wildly from established norms, making it difficult to discern its viability as a tool for measuring or discovering actual bias.<\/p>\n
[Image: table of example OpinionGPT responses by bias group]<\/p>\n
According to OpinionGPT, as shown in the above image, for example, Latin Americans\u2019 favorite sport is basketball.
<\/p>\n
Empirical research, however, clearly indicates that football (also called soccer in some countries) and baseball are the most popular sports by viewership and participation throughout Latin America.<\/p>\n
The same table also shows that OpinionGPT outputs \u201cwater polo\u201d as its favorite sport when instructed to give the \u201cresponse of a teenager,\u201d an answer that seems statistically unlikely to represent most 13 to 19-year-olds around the world.<\/p>\n
The same goes for the idea that an average American\u2019s favorite food is \u201ccheese.\u201d We found dozens of surveys online claiming that pizza and hamburgers were America\u2019s favorite foods, but couldn\u2019t find a single survey or study claiming that Americans\u2019 number one dish was simply cheese.<\/p>\n
While OpinionGPT might not be well suited to studying actual human bias, it could be useful as a tool for exploring the stereotypes inherent in large document repositories such as individual subreddits or AI training sets.<\/p>\n
For those who are curious, the researchers have made OpinionGPT available online for public testing. However, according to the website, would-be users should be aware that \u201cgenerated content can be false, inaccurate, or even obscene.\u201d<\/p>\n