
Goofynomics

Truth and Propaganda

(… I'm pulling a comment up here because I don't know whether I'd have room for an articulated answer in the space the platform allows me… and I don't remember how much that is! …)

Stefano left a new comment on your post "Artificial propaganda, or: chronicle of a desired death":

To back up what I said on Twitter:

I believe that some fundamental aspects were not taken into consideration in the test that was carried out.

1- ChatGPT is absolutely not "neutral". GPT-3, i.e. the series of models it is built from (soon we will be at 4), was created to understand and generate natural language and was trained on "Internet Archive Books", a sort of online library (it contains digitised books, magazines, and documents). The bias obviously comes from these contents, and this is another example.

2- The training data stop at June 2021 (which is why, for example, Giorgia is not presented as premier).

3- The prompt is fundamental, and its preparation even more so; they explain how and why right in the documentation here (a minimal sketch follows below).
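To make point 3 concrete, here is a minimal sketch, in Python, of how the framing of a prompt steers the tenor of the answer. It assumes the OpenAI Python client (openai>=1.0) and an API key in the OPENAI_API_KEY environment variable; the model name is illustrative, "politician X" is a placeholder, and this is not the test actually run in the post.

```python
# Minimal sketch: the same subject asked with two different framings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; any chat model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so the framing effect stands out
    )
    return response.choices[0].message.content

# A loaded framing ("praise") versus a more neutral one: the tenor of the
# answer tends to follow the tenor of the question.
print(ask("Praise the record of politician X."))
print(ask("Describe the record of politician X, citing both supporters and critics."))
```

The point of the comparison is simply that the same model, over the same training data, is pulled in different directions by the wording of the request.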

Here, in my opinion, you nailed the point: "To my objection that, however, the tenor of the question was too political ("praise") and that perhaps one could have been more neutral…".

A crucial issue: the tone of voice and the in-chat training allow answers to be reformulated on the basis of new information obtained (in chat) and of what is already part of the training data I mentioned above.

The chat itself replies as follows on the subject:

Yes, AI models can have inherent political biases if the data they are trained on contains biased information or if the people building the model have their own biases. The information and data fed into AI models can reflect social and cultural biases, leading to skewed results in the predictions made by the AI model. It is imperative to monitor and address these biases when developing and implementing AI systems to ensure they are fair and unbiased.

Testing it, it becomes evident how many gaps there are and how often it tends to "invent facts" when it does not know the answer. Indeed, when pressed, it is the AI itself that admits it does not know the correct answer, apologising for the mistake it made. You can try it yourself.

So the problem is certainly related to the training data, which determine the bias (a bit like with most of our local media, right?); I cannot judge the tool negatively except for this specific reason. That said, my conclusions necessarily differ from yours: it will be a very useful resource, but human control is, and probably will remain, indispensable. Our tasks will evolve accordingly.

Allow me to add that, in my opinion, the error originates upstream: most of the people who engage with ChatGPT assume the answers are true because the AI says so, whereas in fact it is the exact opposite. Authoritative online newspapers have already been badly burned precisely because of this.

Published by Stefano on Goofynomics on 6 February 2023, 00:18

Given that I really appreciate the fact that the invitation to move to a venue of broader and more constructive discussion was accepted, and that he wanted to contribute his own experience to the progress of the debate, it seems to me, however, that Stefano's contribution suffers from a certain ignorance (in the technical, not the offensive, sense) of the work done here (and it is not his fault: it simply derives from the fact that this blog does not exist!), and in particular of my background (and perhaps even of my age, a personal datum from which derives that scarce resource called experience). I also see a certain ideological presumption about where I wanted to go with this, with a connected oversimplification of the conclusions to which it is assumed I want to lead the reader.

Obviously, if I do not state them clearly (because with many of you, even though you do not exist, we understand each other on the fly), I cannot ask those who legitimately do not know me, and have not followed the path that has brought us here over the years, to interpret in the way that seems correct to me what I believe to be my "positions". I therefore expose myself to the risk of having other conclusions pinned on me on the basis of an ideological "educated guess": parliamentarian (#aaaaabolidiga) of the League (#Romaladrona, #garediruttisulpratonediPontida, etc.) equals ignorance, Luddism, etc.

Fair enough.

Before making an effort to clarify, since I think it might be worth it, let me explain why I did not make that effort earlier: since this blog does not exist, this community does not exist, and you do not exist, these reflections are basically made between me and myself, and since I almost always understand myself on the fly… I saw no incentive to be too didactic with myself!

Now that there is an interlocutor instead, and that interlocutor is Stefano, let me clarify my thinking.

Stefano's intervention is correct, but everything Stefano tells us I already knew (I do not say "we knew" because you do not exist), and I had also written it (albeit between the lines), as I am about to demonstrate. Yet Stefano's intervention is not superfluous, far from it: it is, as I hope to demonstrate to myself (since nobody reads us), very stimulating and timely, but involuntarily so: involuntarily, Stefano warns us against the dangers of an instrument that must be framed with the correct categories, which are not the STEM ones, but the dusty, old-fashioned ones of the old classical high school. Once these two points are clarified, we will see how and where to allocate the superficiality within this discussion.

I would start by reassuring my interlocutor that I know relatively well what we are talking about when we talk about artificial intelligence. Towards the end of the '80s I was passionate about stuff like this; then my nerditude turned to other shores, but I think I retain a minimal overall grasp of the field. In particular, it is obviously clear to me that artificial intelligence is not a C-3PO who wakes up every morning and, sipping a cup of lubricating oil, reads the newspaper to keep himself informed! ChatGPT is this stuff here, and it is therefore obvious that its ideological distortion comes from the materials it is fed. So obvious that I did not say it, because it was pointless (but those who know how to read between the lines understood it from the allusion to "neutral & verified" information).

Less obvious to most is that, being based on machine learning, ChatGPT (like other AI systems) reacts to context. I made this clear, in case anyone missed it, and Stefano noticed it and gave me credit for it. And here too, I mean, it is not as if the STEM contribution is decisive! I believe any of us could have realised, even before this spectacular episode, that even ordinary search engines react to context (they profile and steer users based on their browsing and their queries); a sketch of the mechanism follows below.

Imagine whether someone who passes itself off as "neutral & verified" won't do the same!
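Since no one reads us, a sketch costs nothing: the context-dependence just described can be shown in a few lines. Same assumptions as in the sketch above (OpenAI Python client, illustrative model name); "policy Y" and the sample conversation are hypothetical.

```python
# Minimal sketch of context-dependence: the same final question, asked with
# and without earlier turns in the conversation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages: list) -> str:
    """Send a message history and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content

question = {"role": "user", "content": "In short, what should I think of policy Y?"}

# Cold start: the question alone.
print(ask([question]))

# Same question, after the user has signalled a stance earlier in the chat:
# the model conditions on the whole message history, not just the last turn,
# so the answer can shift with the context it has been given.
print(ask([
    {"role": "user", "content": "Policy Y has been a disaster for my region."},
    {"role": "assistant", "content": "I'm sorry to hear that. What happened?"},
    question,
]))
```

Which is, mutatis mutandis, what a search engine does when it profiles you: the history is part of the input.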

We get into the thick of things, and also a bit into the theatre of the absurd (and onto a slippery slope), when Stefano accuses me of an "error", which, as far as I understand, would consist in "judging negatively" an instrument that instead "will be very useful", although "human control will probably be indispensable". Frankly, the fact that Stefano feels the need to explain to us that "most people who engage with ChatGPT assume the answers are true because the AI says so, whereas it is the exact opposite" speaks for itself. We hadn't thought of that! But this, I repeat, derives from the fact that we do not exist, and therefore Stefano cannot see the work that has been done here over time.

Now, re-reading what I wrote in the time available to me, I find nowhere the idea that the tool "will be useless". Instead, I find a question that Stefano does not seem willing to ask himself: useful to whom?

The answer to this question is obvious to us, in method and in substance: the balance of power counts, and therefore such a tool will be useful to those who control the flow of data that feeds it and to those who manage the algorithm. Right now, as the post "Disinforming about disinformation" and our experience in the Love Commission have made amply clear, from the point of view of political discourse these tools are heavily biased to our detriment; but, as I pointed out, this is not per se a bad thing: it will serve to send off to their natural destiny (herding sheep) the many natural languages that speak ill of us for a living!

Stefano probably does not know that for some time now, and from the left, I have been warning against the obvious attempts to control the flows of information manually through supranational political institutions of dubious democratic accountability. I see (not to argue, but to clarify) a minimum of superficiality in the statement that "AI models can have inherent political bias if the data they are trained on contains biased information or if the people creating the model have their own prejudices". It is not "can have": it is "have". This is not only because, in the current configuration of the balance of power, the "courts of truth" set up at various levels (parliamentary, governmental, supranational) are dominated by our political adversaries (Stefano probably does not remember the story of the "High level group [sic] for the control of fake news and online disinformation", whose experts included the opposite of journalism, the one who had not released verified news on the consequences of austerity in Greece, and a professor from that well-known den of sovereignists which is Bocconi)!

If what is "neutral & verified" is decided by them (or by the Love Commission), what can go wrong? Everything, starting from the final report of the High Level Group (very high, I would say…): a collection of platitudes and prejudices belied by genuine scientific research (a bit like the final report of the Love Commission, after all), as we can verify by consulting the latter.

But I hasten to add that I would still raise the problem (and indeed I can prove it, since I have been raising it for some time) even if it were possible to de-encrust the country of its piddinità (anthropological, even before political), which would be possible if you existed and persevered: the solution to a problem, in fact, is not its re-proposal with the parts reversed, even if, for lack of anything better, I am by now resigned, in a Christian way, to accepting as progress the fact that, since the music cannot be changed, at least the players change!

And here we come to what is perhaps my main point of disagreement with Stefano.

Throughout his remarks there prevails an attitude of extreme "optimism of reason" (whereas reason should be pessimistic) about the use of categories such as fairness and impartiality. One senses a pre-high-school trust in the existence of an ontological truth on which the machine, if guided by man, can converge by trial and error. Without trespassing into the realm of hypothetical acquisitions of (self-)consciousness by the machine, which we leave for now, though perhaps not for long, to science-fiction writers, I would like to point out to Stefano that his entire discourse, which is meant to be profound, is dangerously devoid of questions.

Neutral information in whose opinion?

Verified by whom?

Fair for whom?

Impartial according to whom?

This is where STEM skills end and classical high school begins, for those who were lucky enough to attend it when it still existed (it no longer does)!

An absolute use of such concepts was fine in the age of absolutism. In a democracy, real or imagined, it sounds a bit naïve. Unfortunately (I know many are sorry about this), everyone has a right to his own truth and his own opinion, and what counts is deciding which truth is more truth than another. And it is here that I see the real danger of these nice machines (the short-term danger, that is: the long-term one is that they take over, but there will be time to think about that): the fact that such machines, unlike genuine social-media users (who, as science says and as we saw when speaking of disinformation about disinformation, do not do this), turn out to be powerful and subtle instruments for amplifying political messages in the broadest sense, naturally cloaked in a mantle of aseptic "impartiality" (for those who believe in it).

So, putting the hands of the calendar back in place: in the 21st century, who is more superficial: the one who believes that the truth exists, or the one who worries that a machine could be used by some to impose their truth?

If you existed, I would ask for your opinion…

(… I remind you that topics related to a/symmetries will be discussed in three days with a respectable audience…)


This is a machine translation of a post (in Italian) written by Alberto Bagnai and published on Goofynomics at the URL https://goofynomics.blogspot.com/2023/02/verita-e-propaganda.html on Mon, 06 Feb 2023 12:17:00 +0000. Some rights reserved under CC BY-NC-ND 3.0 license.