It's not an opinion if a bot wrote it
Reputation is no longer built. It is protected.
Once upon a time, it took people to spark a protest. Now, all it takes is an account.
An Instagram page, a hashtag on X, a screenshot on TikTok. Hundreds, sometimes thousands, of near-identical comments posted by faceless accounts with no followers and no history. All convinced. All synchronised. All against someone. A modern staging of public outrage.
But if the wave is manufactured, is it still a scandal? And what if it is not a crowd crying scandal, but a script?
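The pattern described above (near-identical text, empty accounts, tight timing) is, at least in principle, machine-detectable. Purely as an illustration, here is a minimal Python sketch, assuming each comment arrives as a record with hypothetical fields "text", "timestamp", "follower_count" and "account_age_days", and using arbitrary thresholds:

```python
# Illustrative sketch only: field names and thresholds are assumptions,
# not any platform's real detection logic.
from difflib import SequenceMatcher

def looks_coordinated(comments, similarity=0.9,
                      window_seconds=3600, min_cluster=20):
    """Flag a burst of near-duplicate comments from 'empty' accounts
    inside one time window. A crude heuristic, not proof of manipulation."""
    # Keep only accounts with no audience and almost no history.
    suspect = [c for c in comments
               if c["follower_count"] == 0 and c["account_age_days"] < 7]
    suspect.sort(key=lambda c: c["timestamp"])

    for i, anchor in enumerate(suspect):
        cluster = 1
        for other in suspect[i + 1:]:
            # Stop once we leave the anchor's time window.
            if other["timestamp"] - anchor["timestamp"] > window_seconds:
                break
            # Count comments whose text is nearly identical to the anchor's.
            if SequenceMatcher(None, anchor["text"].lower(),
                               other["text"].lower()).ratio() >= similarity:
                cluster += 1
        if cluster >= min_cluster:
            return True
    return False
```

Real platforms presumably rely on far richer signals (IP ranges, device fingerprints, posting cadence), and attackers adapt to whatever heuristic is in use; that arms race is precisely why, from the outside, an orchestrated wave and a spontaneous one can look the same.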
Indignation on command
Indignation has become a strategic lever: some activate it for a fee, others cultivate it as a marketing tool, and still others suffer it without even knowing why.
We are talking about systematic shitstorms (even the least attentive among you will have noticed the etymology): attacks orchestrated with bots or fake profiles, generated or managed by artificial intelligence to target people, brands, parties, investors, funds, media. Anyone with visibility.
This is not hacking; it is not even always disinformation. It is more subtle: it is the distortion of public opinion. And often it takes very little: an AI that writes credible texts, an operator who decides on the tone, and a platform ready to spread them.
And when it happens in an already inflamed context (see the Ferragni case in Italy), there is no way to distinguish spontaneous reactions from orchestrated ones.
If it makes noise, it works
Platforms claim to be neutral, but only when it suits them. They moderate, amplify, and hide. They reward what generates engagement, even when it generates no value. And we, users raised on constant interaction, are no longer trained to ask who wrote what. It is enough for us to know that it made noise.
So bots create polarisation, and polarisation creates attention. Which can then be monetised: with clicks, visibility and sales.
Who suffers from all this? Companies, which no longer have the tools to tell a real reputational crisis from an artificial one; people, who find themselves caught up in waves of pre-packaged hatred; and finally, collective trust, which crumbles comment after comment.
Who is responsible if it is a bot speaking?
Current legislation is not equipped to distinguish between legitimate opinion and a campaign orchestrated by software. However, some legal provisions already exist and can be invoked under certain conditions:
defamation (article 595 of the Italian Criminal Code): if the content is offensive and attributable to a natural person, even through bots, it may constitute a criminal offence;
market manipulation (article 185 of the Consolidated Law on Finance): a reputational attack may constitute market manipulation if it is capable of causing a significant alteration in the price of a financial instrument;
unfair competition (article 2598 of the Italian Civil Code): if the aim is to discredit a competitor through automation, this may also be relevant;
platform liability: the Digital Services Act imposes mainly ex post removal obligations. No real prevention.
The real issue? Attributing automated actions. Who actually wrote that comment: the prompt, the user or the algorithm?
Today, responsibility evaporates. The damage, however, remains.
Defending an identity, not the truth
If every consensus can be constructed and every discredit can be planned, reputation is no longer the sum of actions, but a narrative to be defended.
And companies are realising this. Today's communicators are concerned not only with the truth, but with how well their image withstands manipulation. When in doubt: communicate first, take a stand first, deny first, even if there is nothing to deny yet.
Not because you have something to hide, but because your reaction time cannot match the speed of an automated attack.
Whoever makes the most noise wins
In a world where you can buy likes, manufacture outrage, fabricate testimony and simulate accusations, reputation is no longer built. It is protected.
And artificial intelligence, once again, has only taken the problem a step further: the point is no longer whether something is true, but whether it can seem true enough to become true.
We no longer need to know who is right. It is enough to know who made the most noise.
But amid all this noise, are we still able to recognise a genuine voice?