Many of us are studying how new technologies like AI impact the way we live and work, and developing new methods to aid the scientific knowledge construction process. But what about the way we do research? Has anyone thought about how automated, AI-mediated content generation will affect the scientific method of discovery and the scientific knowledge construction process?
How do we construct knowledge when a portion of our data is created by automated agents? How does this affect the knowledge construction process? What constitutes valid knowledge in this context? How do we differentiate between human-generated and AI-generated data? Do we need to differentiate them at all?
When we do research, seeing is believing: we see patterns in the data, we believe those patterns do exist, and we move on to validating them. However, with AI algorithms and automated content generation and modification, seeing no longer means believing, because AI tools can be used to alter reality in many ways. Consider this example: you see an image or a video of a person addressing a crowd. But that event never happened in the material world; it was generated by an automated agent (or an algorithm). Nevertheless, the image or video is included in your data analysis. Now suppose many such images, videos, and their textual context appear in your data, and you draw insights and conclusions from them.
In my mind, this has implications for the scientific knowledge construction process, particularly in the social sciences. The inferences we make from such data may be flawed. How do we validate knowledge constructed from it? Looking ahead, given the proliferation of AI agents, we need to start thinking now about how we deal with this.
Note: This post is written to spark a conversation rather than to give a complete picture of the problem or its solutions.