Researchers from the National Research Council Canada ran experiments on four large vision-language models (LVLMs) to determine whether they displayed racial and gender bias. AI models are trained on massive amounts of data that inherently reflect the biases of the societies from which the data is collected. In the absence of complete data, humans generalize,…
The post Does AI display racial and gender bias when evaluating images? appeared first on DailyAI.