Don't be seduced!

It is human to be human. That is why it may be worth taking a critical look in the mirror when using platforms that produce AI-generated text, for example.

However, it also means that we are approaching a reality where we may not necessarily be able to distinguish whether we are communicating with a computer or a human.

About ChatGPT at Viden.ai (our translation)

Language models, and generative artificial intelligence in general, are not necessarily designed to produce true statements, but rather statements that are statistically likely. Furthermore, these systems are often deliberately designed to seem very human-like, which makes it easy to be impressed by their responses. There is often good reason to be impressed, but beyond a critical approach to sources, it is essential to take a look in the mirror and consider the following:

  1. Are you using a system, or are you in a conversation with a robot?

    Have you caught yourself saying things like "ChatGPT writes to me that ...", writing "thank you for the answer", or exchanging smileys with a generative AI platform? It is perfectly natural. Attributing human characteristics to systems is called anthropomorphism, and the effect is amplified when ChatGPT, for example, writes back slowly, uses smileys, etc.

    Consider referring to it consistently as a system, and think about whether you really want to describe yourself as being "in dialogue" with a language model. Be aware that systems that can enter into "dialogue" have strong persuasive power.

  2. Can you evaluate the answer?

    Humans tend to overestimate the knowledge of systems compared to knowledge from other sources. This is called automation bias: a tendency to trust along the lines of "if the system says it, it must be right."

    According to UNESCO's material on ChatGPT, a key question you should always ask yourself is: do you have enough expertise in a given area to assess the validity of an answer? Precisely because AI can provide misleading answers, and because of automation bias, it is important to be critical and reflective. The less you know about a topic, the more critical you should be, and the more you should consider whether sources other than generative AI can give you better insight into the topic.

  3. Is your brain stuck on the first assumption?

    The first information we get about a given subject tends to stick, and can become the frame for how we assess all subsequent information. This bias is called anchoring bias, and it is essential to free ourselves from it. This applies to all thinking, but with generative AI in particular it can be important to work on it consciously. Therefore, challenge yourself when you find new information about a subject: do you assess it independently, or in relation to the first input you received, e.g. from generative AI?

  4. Are you getting the answers you ask for?

    Confirmation bias leads us to seek, interpret, and remember information in ways that confirm or strengthen our existing beliefs or hypotheses, while avoiding or ignoring information that contradicts them.

    To counteract confirmation bias, it is important to be aware of it and to actively seek a wide range of sources and perspectives. You can do this across platforms, but you can also make sure your prompts are open and do not steer the system in a specific direction. For example, try prompts that explore arguments both for and against an assumption.
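As a sketch of what "open" prompting can look like in practice, the snippet below contrasts a leading prompt with a more balanced one. The exact wording, the function names, and the example claim are illustrative assumptions, not taken from the source:

```python
# Illustrative sketch: a leading prompt nudges the system toward confirming
# a claim, while a balanced prompt explicitly asks for both sides.

def leading_prompt(claim: str) -> str:
    # Phrasing like this invites the system to confirm the claim.
    return f"Explain why {claim} is true."

def balanced_prompt(claim: str) -> str:
    # Phrasing like this asks for arguments on both sides,
    # which works against confirmation bias.
    return (
        f"Give the three strongest arguments for, and the three strongest "
        f"arguments against, the claim: {claim}. "
        f"Then summarize the open questions."
    )

claim = "homework improves learning outcomes"  # hypothetical example claim
print(leading_prompt(claim))
print(balanced_prompt(claim))
```

The point is not the code itself but the contrast: the first phrasing presupposes the answer, the second invites evidence in both directions.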

Work smart in a group

If you are working in a group, consider whether you should create your prompts together or separately and then compare them afterwards. The latter takes longer, but it can help ensure that the group does not think too much alike and that you challenge some of the biases we all have.