Item

Citations
« There are really two things to be concerned about when we're thinking about images, studying them at large scale, and using these big models that have been trained on massive data sets. One is data privacy, of course: a lot of the time we don't really know where these images are coming from or what's going on, and as others who have presented on generative AI have noted, this is a real problem they face. But I'm going to focus a little more on bias in predictions. What we find is that because many of these models have been trained on human-labeled images, the predictions we get may convey the inherent biases that exist among the coders. »
Cited by
Constantine Boussalis
startTime
1,197
endTime
1,245
datasetTimeInterval
12 October 2023 – 12 October 2023

Linked resources

Items with "Citation taken from the conference: « There are really two things to be concerned about when we're thinking about images, studying them at large scale, and using these big models that have been trained on massive data sets. … »"
Title: The Computational Turn in Visual Political Communication Research
Class: Conference
