金融翻訳者の日記/A Translator's Ledger

Daily activity logs and musings by a translator who has been independently self-employed for over a decade.

Generative AI Cannot Be Left to Its Own Devices

In an experiment I ran with ChatHub, six different generative AI models read the same passage and, following the same prompt, each extracted what it deemed the 'most important sentence.' They often chose different sentences (though there was certainly some overlap). Similarly, ChatGPT's responses to an identical prompt can vary depending on when it is asked. Why does this happen?
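The comparison step of that experiment can be sketched in a few lines: collect the sentence each model picked and tally how often the models converge. A minimal sketch with mock data (the model names and picked sentences below are illustrative placeholders, not the actual experiment's output):

```python
from collections import Counter

# Hypothetical results: the sentence each model picked as "most important"
# from the same passage. Names and picks are illustrative, not real data.
picks = {
    "model_a": "Inflation expectations remained anchored.",
    "model_b": "The central bank signaled a pause.",
    "model_c": "Inflation expectations remained anchored.",
    "model_d": "Wage growth outpaced forecasts.",
    "model_e": "The central bank signaled a pause.",
    "model_f": "Inflation expectations remained anchored.",
}

# Count how many models converged on each sentence.
tally = Counter(picks.values())
most_common_sentence, votes = tally.most_common(1)[0]

agreement = votes / len(picks)  # share of models agreeing on the top pick
print(f"Top pick ({votes}/{len(picks)} models): {most_common_sentence}")
```

Even this toy tally makes the point of the experiment visible: with six models, the most popular pick may win only half the votes.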

The reason responses to the same prompt differ with the timing of the inquiry or the type of generative AI used is not simply that the AI 'makes a mistake' (though that can happen). My working hypothesis, based on experience, is that the AI's state at that moment (its type, the timing, its training conditions, and so on) leads it to interpret values differently.

There are two main strategies to mitigate this and align the AI's responses more closely with the intended direction:

1. Make the prompt's instructions as detailed as possible, especially regarding the perspectives that should be considered in the response.
2. Adjust the AI's responses to better match one's own sensibilities and viewpoints.

Both strategies are necessary, but the latter is especially important due to the inherent limitations of the former.
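The first strategy amounts to enumerating one's evaluation perspectives directly in the prompt. A minimal sketch of assembling such a prompt (the listed perspectives are my own illustrative assumptions, not a prescribed set):

```python
# Illustrative perspectives a financial translator might care about;
# this list is an assumption for the sketch, not a definitive set.
perspectives = [
    "relevance to the passage's main argument",
    "importance to a financial-market reader",
    "self-contained meaning (understandable out of context)",
]

def build_prompt(passage: str, criteria: list[str]) -> str:
    """Assemble a prompt that spells out the criteria the AI should weigh."""
    bullet_list = "\n".join(f"- {c}" for c in criteria)
    return (
        "Read the passage below and quote the single most important sentence.\n"
        f"Judge importance by these criteria:\n{bullet_list}\n\n"
        f"Passage:\n{passage}"
    )

prompt = build_prompt("(passage text here)", perspectives)
print(prompt)
```

The limitation is immediately apparent: however long the `perspectives` list grows, it never exhausts the values one actually holds, which is why the second strategy remains necessary.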

That is, while explicit corrections are easy when the output is clearly off-target from what the prompt intended, there are many cases where the response merely feels subtly different from what was sought. This 'difference in sensibility' likely stems from perspectives deeply ingrained through experience that are not always consciously recognized. These perspectives are, in essence, 'values.'

Spelling out all of these 'values' in a prompt beforehand is impractical; the task would be endless. So, from the user's side, the realistic approach is to cover the key values in the prompt, whether by focusing on specific points or by covering them broadly, without overextending. After the prompt is executed, the next step is to adjust the AI's response using one's knowledge and reasoning (conscious) and one's sensibilities (usually subconscious), so that the result does not drift from the intended target. In this sense, a final human check is indispensable.

This hypothesis emerged while summarizing various texts with ChatHub.