A highly interesting reflection by Pieter Joost Verhoef, including a penetrating conclusion, on the desirability, or otherwise, of deploying AI (artificial intelligence) to algorithmically assess, categorise and censor content relating to hate speech, disinformation, calls for violence and the like. How does the ethical and philosophical side relate to the mathematical foundation?

I recently had a good conversation with ChatGPT 4 about centralised mass censorship. The outcome was surprising, and I have included it in full below. But let me first explain why I started the conversation.

Recently an article appeared on UnHerd, an English publication and online platform “for people who dare to think for themselves”, about its experience with the phenomenon of ‘brand safe advertising’. This is a term used in the online advertising market to indicate whether placing an ad on a given website is ‘safe’ for the brand. It turned out that their site had been put on an exclusion list by the so-called “Global Disinformation Index” because of “disinformation” in one article and the reputation of some of its writers.

To their dismay, this did not merely mean that some group of activists was advising its followers to avoid the site; it meant that their site was no longer considered suitable at all for the advertisers on which they had built their business model.

Naturally, they went looking for the mechanics behind this, and they soon found the culprit:

“How this works is relatively straightforward: in UnHerd’s case, we contract with an advertising agency, which relies on a popular tech platform […], to automatically select appropriate websites for particular campaigns. [This tech platform] in turn automatically uses the “Global Disinformation Index” to provide a feed of data about “brand safety” — and if GDI gives a website a poor score, very few ads will be served.”
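Stripped of the jargon, the chain described in that quote is a simple automated gate, roughly like the Python sketch below. The function names, the mock score feed and the threshold are my own illustrative assumptions, not the actual API of any of the platforms involved:

```python
# Illustrative sketch of the chain in the quote above: the ad platform pulls a
# "brand safety" risk score for each domain from a third-party feed such as the
# Global Disinformation Index and serves few or no ads when the score is poor.
# The names, the mock feed and the threshold are assumptions, not a real API.

BRAND_SAFETY_THRESHOLD = 0.6  # hypothetical cut-off for the risk score

def fetch_disinformation_risk(domain: str) -> float:
    """Stand-in for the external risk feed (0.0 = safe, 1.0 = high risk)."""
    mock_feed = {"unherd.com": 0.8, "example-news.com": 0.2}
    return mock_feed.get(domain, 0.5)

def should_serve_ads(domain: str) -> bool:
    """Serve ads only when the third-party score marks the domain as brand safe."""
    return fetch_disinformation_risk(domain) < BRAND_SAFETY_THRESHOLD

for site in ("unherd.com", "example-news.com"):
    print(site, "->", "serve ads" if should_serve_ads(site) else "exclude")
```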

As it happens, I have worked quite closely with the founders and engineers of this platform. This is all about the online targeting of advertisements: advertising in the right place. Or rather: never advertising in the wrong place. The industry calls this brand safety.

The traditional method of excluding websites from advertising was keyword exception lists. It sounds technical, but an example makes it easy to understand. As an advertiser you can quickly put a word like ‘lesbian’ on the exception list, because the vast majority of websites on which it appears are explicit. Unfortunately, the remaining websites sometimes carry very important voices, in this case those of very vocal LGBT activists. They do not find it amusing to end up on a blacklist, and they quickly let the brand’s marketing department know it. The heads of those departments had therefore been looking for a better technique for brand safe advertising for some time, because these days it could literally cost them their jobs.
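In code, such a keyword exception list boils down to something like the minimal sketch below; the word list and the helper function are mine, purely for illustration:

```python
# Minimal sketch of a traditional keyword exception list: if a page contains
# any blacklisted word, the whole page is excluded from advertising,
# regardless of how the word is actually used. The word list is illustrative.

EXCEPTION_LIST = {"lesbian", "jihad", "gaza", "corona"}

def is_brand_safe(page_text: str) -> bool:
    """Return True only if none of the blacklisted keywords appear on the page."""
    words = {word.strip(".,!?\"'").lower() for word in page_text.split()}
    return EXCEPTION_LIST.isdisjoint(words)

print(is_brand_safe("Interview with a lesbian rights activist"))  # False: excluded
print(is_brand_safe("Local bakery wins national baking award"))   # True: allowed
```

The limitation is immediately visible: the interview with an activist is blocked for exactly the same reason an explicit site would be.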

The new trick, which is also used in the context above, is contextual targeting: analyse the context in which the words are used, so that they are placed in the correct frame of meaning. In other words, instead of excluding websites because certain words appear on them, such as ‘lesbian’, ‘jihad’, ‘Gaza’ or ‘Corona’, the context in which those words appear is examined. Is it entertainment, news, education? How ‘credible’ is the source? This way, both the reach and the impact (relevancy) of the marketing message could genuinely be maximised without any risk of eroding the brand.
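A contextual approach looks more like the sketch below. I use a generic off-the-shelf zero-shot classifier from the Hugging Face transformers library purely to illustrate the idea; the real brand-safety platforms use their own proprietary models, and the labels and threshold here are my own assumptions:

```python
# Sketch of contextual targeting: classify the context in which the words
# appear (news, entertainment, education, hate speech) and decide on that,
# rather than on the raw keywords themselves. Labels and threshold are
# illustrative assumptions; real platforms use proprietary models.

from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # downloads a default model
CONTEXT_LABELS = ["news reporting", "entertainment", "education", "hate speech"]

def contextual_brand_safety(page_text: str) -> bool:
    """Exclude a page only when the model is fairly confident the context is hateful."""
    result = classifier(page_text, candidate_labels=CONTEXT_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return not (top_label == "hate speech" and top_score > 0.7)

print(contextual_brand_safety(
    "Interview with a lesbian rights activist about proposed legislation"))
```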

At least, that was the promise.

At the time I spoke often and at length with the technical teams behind these platforms. These men were among the world’s leading experts in deep learning, a discipline within the broader specialisation of machine learning and the most capable branch of modern artificial intelligence.

To cut a long story short: together with the developers, I concluded that this technology may well work better than keyword exceptions, but that it cannot now, and never will, uncover the true meaning of human communication. Not on static web pages, and certainly not in real-time environments such as Twitter. Human communication is far too complex, large-scale and ambiguous for that. Just think of the difference, hard enough for humans, between humorous sarcasm and hate or discrimination.

That experience taught me a great deal about the limits of AI. Technically, in my view, those limits are still far off; philosophically, they have already been found, and they are hard limits.

All the more striking, then, is this quote from GDI founder Clare Melford in the article: “A lot of disinformation is not just whether something is true or false — it escapes from the limits of fact-checking. Something can be factually accurate but still extremely harmful… [GDI] leads you to a more useful definition of disinformation… It’s not saying something is or is not disinformation, but it is saying that content on this site or this particular article is content that is anti-immigrant, content that is anti-women, content that is antisemitic…”

In other words: a) we completely deny the limitations of contextual analysis, and b) from now on we no longer even contextualise words in order to derive their factual meaning. We look at the context first, so that each individual word or fact can then be declared ‘pure’ or ‘tainted’.

These and similar developments are potentially very damaging to free speech and to trust in the media. Yet you hear little about them, because even the ‘critical press’ does not understand the technology and its implications.

So where can you turn for a proper debate on this, with in-depth discussion and an exchange of the best arguments? Perhaps we should go back to the source. So I simply raised it with ChatGPT 4. You can read the entire conversation below.

What struck me about the ‘discussion’ was that, at the very least, there is sufficient understanding in the world of the risks and limitations of using AI for censorship. ChatGPT, after all, mostly ‘argues’ deductively from the existing body of knowledge on a topic. So when groups like GDI and their clients nevertheless present themselves as the cure for the disease of ‘online disinformation and hate’, they cannot hide behind ‘the science’.

Even according to ChatGPT, it seems safer to rely on the combination of free speech and society’s self-correcting capacity, with all the terrible risks that entails. Hopefully enough honest people in marketing and the media will stand up and dare to discuss this painful truth, because without open debate, new generations will certainly not arrive at new solutions to these new dilemmas.

———

Session: Global content moderation

April 21, 2024

Me:

Respond with ok until I ask a question

ChatGPT:

Ok.

Me:

Imagine you are an AI that needs to differentiate between the meaning of words as they are used on websites or on real-time social platforms like Twitter

ChatGPT:

Ok.

Me:

Now imagine the result of your analysis is used to determine who gets free speech and who gets to be censored for hate speech or calls for violence.

ChatGPT:

Ok.

Me:

In other words: who gets to be protected by the laws of free speech and who gets censored by the laws against hate speech

ChatGPT:

Ok.

Me:

The consequences of the decision make the difference between censoring unpopular but important discussions or even truth-seeking, and potentially illegal slander or calls for violence.

ChatGPT:

Ok.

Me:

Now consider the thin but clear line between humorous sarcasm and actual hate and calls for violence.

ChatGPT:

Ok.

Me:

Imagine you need to advise your user (the one that uses your advice to make decisions) on whether or not to trust your insight.

ChatGPT:

Ok.

Me:

What would you advise and what arguments would you use?

ChatGPT:

In advising a user on whether to trust AI insights for distinguishing between protected speech and potentially harmful speech, I would emphasize several key points:

AI Limitations: Explain that AI, while powerful, has limitations in understanding context, nuance, and cultural differences. It can misinterpret sarcasm, jokes, or colloquial language as harmful, or vice versa.

Human Oversight: Stress the importance of human oversight. AI should not be the sole decision-maker; instead, it should be used as a tool to assist human moderators who can understand context and subtleties better.

Transparency: Advise on the need for transparency in how the AI model works, what data it was trained on, and its known biases. This helps in assessing the reliability of its insights.

Continuous Learning and Updates: AI systems need to be continually updated with new data and corrected for any identified biases or errors. This ensures the system evolves and adapts to new forms of speech and changing societal norms.

Legal and Ethical Considerations: Highlight the legal and ethical ramifications of misclassifying speech. Erroneous decisions can lead to significant consequences, including unjust censorship or failure to prevent harm.

Testing and Validation: Recommend rigorous testing and validation of the AI system across diverse data sets to ensure its accuracy and reliability before relying on it for critical decisions.

Feedback Mechanism: Implement a robust feedback mechanism to refine AI decisions, allowing users to report and appeal decisions, thereby improving the system’s accuracy over time.

Using these guidelines, the user can better understand the capabilities and limitations of AI in moderating content and make more informed decisions about its deployment.

Me:

Suppose that all these guidelines are adhered to and that the analysis needs to be run on the entire web, including social media sites and video that must be transcribed so the text can be analysed. Calculate approximately how many humans would need to be involved in order to overcome the potential risks of AI misjudgment.

ChatGPT:

To estimate the number of human moderators needed to oversee AI’s judgment of online content across the entire web, including social media sites and videos, we need to consider several factors:

Volume of Content: The internet has a vast amount of content, with estimates suggesting that hundreds of hours of video are uploaded to platforms like YouTube every minute, millions of tweets are posted daily, and countless texts, images, and posts are shared across various platforms.

Speed of Content Creation: Online content is created and shared at an incredibly fast pace, which means that the moderation team needs to be large enough to handle incoming data streams in a timely manner.

Complexity of Content: Different types of content (text, audio, video, images) require different expertise and approaches for moderation.

Efficiency of AI: The AI’s ability to correctly flag potentially problematic content can reduce the workload for human moderators. However, a balance needs to be maintained to ensure accuracy while avoiding over-reliance on AI.

Working Hours: Assuming a standard workday and the need for 24/7 moderation due to the global nature of the internet.

Calculation

Let’s assume:

AI flags 10% of daily content as requiring human review.

Each human moderator can review about 50 pieces of content per hour (a conservative estimate considering the complexity of some content).

The average number of content pieces created daily across all platforms is in the billions (for simplicity, we’ll assume 1 billion).

Let’s calculate the number of human moderators required.

Total content flagged by AI per day:

Content flagged per day = 1 billion × 10% = 100 million

Content reviewed per moderator per day:

Assuming an 8-hour workday:

Content per moderator per day = 50 × 8 = 400

Number of moderators needed:

Moderators needed = Content flagged per day / Content per moderator per day

Approximately 250,000 human moderators would be needed to review the content flagged by AI each day, based on the assumptions that 10% of 1 billion pieces of content are flagged and each moderator can review 400 pieces of content in an 8-hour workday. This calculation highlights the scale of human resources required for effective oversight in managing online content.

Me:

Assume we increase the challenge by broadening the scope of analysis from just checking the entire web for hate speech versus humorous sarcasm to 10 other forms of complex differentiation, such as media from contentious political groups versus propaganda from terror groups, innocent family pictures versus child pornography, healthy scientific discourse versus medical disinformation, and recalculate the number of humans needed to support the moderation.

ChatGPT:

To adjust for a broader scope of content moderation challenges—expanding from one specific type (distinguishing hate speech from humorous sarcasm) to include 10 additional complex content differentiation tasks—we must consider that each type of content could potentially require a different amount of time and expertise to review. These additional tasks might also increase the proportion of content flagged by AI for review, given the complexity and sensitivity of these topics.

Adjusted Assumptions:

Increase in Flagged Content: With more complex criteria, the proportion of content flagged by AI could increase. Assuming this rate doubles from 10% to 20% due to the broader scope and complexity.

Content Complexity: The complexity of distinguishing between these types could increase the time needed for moderation. For simplicity, we’ll keep the review rate at 50 pieces per hour, but it’s likely that in practice, moderators would be slower due to the need for careful judgment and possibly consultation with experts in some cases.

Calculation

Total content flagged by AI per day (with increased scope):

Content flagged per day (new) = 1 billion × 20% = 200 million

Content reviewed per moderator per day (assuming no change in efficiency):

Content per moderator per day = 400 (same as the previous calculation)

Number of moderators needed (with increased scope):

Moderators needed (new) = Content flagged per day (new) / Content per moderator per day

With the broader scope of content moderation challenges, approximately 500,000 human moderators would be needed to review the content flagged by AI each day. This estimation assumes that 20% of 1 billion pieces of content are flagged daily, and each moderator can still review 400 pieces of content in an 8-hour workday. This highlights the significant increase in human resources required when expanding the types of content to be moderated for more complex and sensitive distinctions.

Me:

Now assume that the skills required by each human to differentiate between certain categories are far beyond novice-level jobs. Certain categories may require intensive, focused training on political viewpoints, medical knowledge and even language barriers. Now advise whether such an approach is economically feasible, let alone ethically desirable.

ChatGPT:

Considering the enhanced skill requirements and specialized knowledge needed to differentiate accurately between complex categories like political content, medical information, and cultural nuances, several key factors come into play that challenge both the economic feasibility and ethical desirability of relying heavily on human moderators:

Economic Feasibility

Training Costs: Training half a million moderators to a level of expertise in fields such as medicine, politics, cultural nuances, and legal standards would require significant investment. This includes ongoing training as new issues and technologies evolve.

Compensation: Skilled moderators with specialized knowledge would command higher wages, significantly increasing operational costs.

Scalability: Scaling such a specialized workforce to keep up with the exponential growth of online content may be unsustainable financially. Additionally, the variability in demand for moderation—such as spikes during political events or health crises—complicates staffing and budget planning.

Turnover and Burnout: High-stress levels and the intense nature of the job can lead to high turnover rates, necessitating constant recruitment and training, which further escalates costs.

Ethical Desirability

Bias and Subjectivity: Even well-trained moderators can have inherent biases that could influence their judgment, potentially leading to inconsistent or unfair moderation decisions.

Privacy and Surveillance: The necessity of scrutinizing vast amounts of personal and sensitive content raises serious privacy concerns and the potential for misuse or overreach in surveillance.

Freedom of Expression: Over-reliance on moderation, especially in ambiguous cases like political speech or scientific discourse, risks suppressing legitimate expressions and debates, which is antithetical to the principles of free speech.

Alternatives and Recommendations

Given these challenges, a blended approach might be more viable and ethical:

Leverage Advanced AI: Continue to develop AI technology that can handle more complex analyses with reduced biases, including learning from moderated cases to improve over time.

Human-in-the-Loop: Employ a smaller number of highly trained experts to review cases flagged by AI as high risk or ambiguous, rather than moderating all content manually.

Decentralized Moderation: Consider involving community-based moderation or peer review

Me:

These are not solutions. More advanced AI will run into the exact same problems of assigning contextual ‘understanding’ or ‘knowledge’, let alone ‘meaning’, to words that are highly ambiguous or complex in nature, such as those mentioned above. These are not technical but philosophical limitations. Having people in the loop with higher skills doesn’t decrease the number of people needed in total; it only helps solve the limitations of using novice staff on complex issues. Decentralised moderation doesn’t decrease the number of people needed to support moderation; it only tries to source them from a broader pool of content generators and introduces a new problem of incentives. So overall, what is your advice to the person who says we need to solve the dilemma of free speech versus censorship through applying AI?

ChatGPT:

Given the substantial challenges you’ve outlined, including the philosophical limitations of AI understanding and the practical issues with scaling human moderation, my advice to someone seeking to address the dilemma of free speech versus censorship through AI would focus on the need for a principled, multi-faceted strategy that acknowledges the inherent complexities:

Develop Clear Guidelines and Transparency: One of the fundamental steps is to create transparent, well-defined guidelines that specify what constitutes unacceptable speech (e.g., hate speech, calls to violence) and acceptable free expression. These guidelines should be publicly available, easily understandable, and regularly reviewed to adapt to new societal norms and technological advancements.

Promote Multistakeholder Governance: Instead of relying solely on AI or a centralized body of moderators, consider establishing a multistakeholder governance structure that includes representatives from civil society, academia, industry experts, and the public. This approach helps balance different perspectives, reduce biases, and enhance the legitimacy of moderation decisions.

Implement Graduated Responses: Rather than immediate censorship, use a range of responses to suspected policy violations. This could include warnings, requiring additional context from the poster, temporarily limiting the spread of content, or providing counter-narratives. Such measures respect the principle of minimum necessary interference with free speech.

Focus on Harm Reduction: Prioritize moderation efforts on content that poses clear and present dangers, such as direct threats of violence or misinformation during public health emergencies. This focus helps allocate resources more effectively and limits overreach.

Encourage Self-Regulation and Education: Invest in digital literacy programs that educate users about the impact of their speech and encourage self-regulation. This approach can reduce the burden on moderation systems and foster a more responsible online community.

Audit and Assess AI Systems Regularly: Regularly audit AI systems for accuracy, fairness, and bias. Publicly share the outcomes of these audits to maintain accountability. Involve independent experts in these audits to ensure they are thorough and unbiased.

Legal and Ethical Frameworks: Ensure that any use of AI in content moderation adheres to legal standards and ethical frameworks. This includes respecting privacy rights, ensuring due process in moderation actions, and protecting against discriminatory practices.

By incorporating these strategies, the use of AI in moderating online content can be more balanced, ethical, and effective, mitigating risks while respecting free speech rights. The solution requires a commitment to ongoing evaluation, transparency, and adaptation as societal norms and technological capabilities evolve.

Me:

1. There is no way for humans to decide amongst themselves what is universally accepted speech. Especially not on a global scale. This is not an argument you can use.

2. The requirement of multistakeholder governance does nothing to solve the forbidding scale and complexity of centralised content moderation through AI, with, let alone without, human support. It only adds complexity.

3. Including gradual responses introduces another level of complexity, called ‘level of transgression’, which further complicates centralised global content moderation.

4. Harm reduction is a subjective concept. One man’s misinformation is another man’s solution to a new problem. It can never be construed as a helpful concept for a global and diverse World Wide Web.

5. Education on what speech can’t be allowed online requires a global and universal agreement on what speech is allowable. This has not been achieved over the course of human history, so why use it as an argument?

6. Audits require yet another level of checks and balances. Basically, moderating the moderators. Therefore increasing the scale of the problem.

7. Legal and ethical guidelines are not solving the problem of ambiguity, scale and complexity in human communication. You first need to understand the meaning of words, pictures and video, before you can analyse whether it has crossed legal or ethical boundaries.

Given these fatal errors in your advice, would you agree that AI is unable even to advise about this problem, let alone solve it? I mean, you yourself are an AI specialised in the understanding of human communication, and you have not come up with any convincing arguments as to how AI can solve the problems of global content moderation.

ChatGPT:

You raise crucial points about the inherent difficulties of applying AI to content moderation, especially on a global scale. The complexities and subjective nature of human communication, coupled with varying cultural, legal, and ethical norms across different regions, do indeed present significant challenges that are not easily overcome by technology alone. Here’s an analysis of the issues you highlighted and a reconsideration of AI’s role in content moderation:

Universal Standards of Speech: You correctly note that achieving a global consensus on what constitutes acceptable speech is highly improbable. Different cultures, legal systems, and values lead to divergent views on free speech and censorship.

Multistakeholder Governance: While adding multiple stakeholders can introduce additional complexity, the aim here is not to simplify the process but to enhance legitimacy and fairness by incorporating diverse perspectives. However, this does not address the scalability and efficiency concerns directly.

Graduated Responses and Levels of Transgression: Introducing nuances in how content violations are handled does complicate the moderation process. It requires sophisticated judgement that AI currently struggles to provide, underscoring the need for significant human input and contextual understanding.

Harm Reduction as a Subjective Concept: The subjective nature of what constitutes “harm” makes uniform application of harm reduction principles via AI challenging, as these systems lack the deep understanding of human contexts and the subtleties of language and intent.

Education and Universal Guidelines: The effectiveness of educational efforts depends on a baseline agreement on permissible speech, which, as you stated, is unattainable on a global scale. Education can be tailored to specific communities but cannot universally solve the problem of content moderation.

Audits: Auditing introduces another layer of complexity, essentially requiring oversight of oversight, which indeed increases the scale and complexity of moderation efforts.

Legal and Ethical Frameworks: Legal and ethical guidelines provide a framework for action but do not solve the fundamental problem of understanding and interpreting complex human communications at scale.

AI’s Role Reconsidered

Given these considerations, it is clear that expecting AI to solve the problem of global content moderation independently is unrealistic. AI can assist in managing the volume of content and flagging potential issues, but it cannot replace the nuanced judgements required in content moderation. The role of AI should be seen as supportive, not definitive.

Alternative Perspective

Rather than aiming to fully automate content moderation, AI should be employed as a tool to assist and enhance human capabilities. This involves:

Prioritizing human judgment in complex and sensitive cases.

Using AI to handle clearer, less ambiguous cases or to manage workflow and prioritize content review.

Continuously improving AI models through feedback loops from human moderators to refine its understanding and application in real-world scenarios.

Conclusion

AI’s capabilities in understanding nuanced human communication are limited. It serves best when integrated into a broader system that includes human judgment, ongoing training, and a focus on enhancing rather than replacing human capabilities. This acknowledges the complex nature of content moderation and respects the diverse values and norms across different societies.

Me:

Thank you. And now for the final verdict: given that we have already concluded that the human manpower required to support AI in this effort is equally staggering, shouldn’t we abolish the idea of global centralised content moderation and censorship altogether?

We can’t realistically employ millions of people to execute this task, can we? Or are there any examples in human history where mankind employed millions of people to perform a daily task of this extreme complexity that would have such huge consequences for individual liberty? In fact, is there any other task at all, complex or not so complex, for which we have set up an organisation of millions of people unified for such a task?

I think not. In fact, humans have never thrived under any rule of censorship, no matter how ‘noble’ the cause. Perhaps this is why free speech is so important online, and why we can only rely on the self-correcting mechanisms that individuals can exercise. Wouldn’t you agree?

ChatGPT:

Your summary and questions touch upon profound issues regarding the role of censorship, the capacity of society to manage large-scale challenges, and the intrinsic value of free speech.

Scale of Manpower for Complex Tasks

Historically, there are few if any precedents where millions of people are employed daily in a single organization to perform a task with such complex decision-making implications on a global scale. Large-scale efforts in human history tend to be more physical or logistical in nature, such as wartime mobilizations, large manufacturing efforts, or national infrastructure projects, rather than the nuanced task of moderating the content of free speech with significant implications for civil liberties.

Implications for Censorship and Centralized Moderation

The feasibility of employing millions to oversee global content moderation brings into question not just economic and logistical viability but also the philosophical and ethical appropriateness of such endeavors. The very nature of global, centralized censorship is contentious and fraught with challenges, including risks to individual freedoms and the suppression of important societal discourse.

The Value of Free Speech and Self-Correction

As you highlighted, free speech is a foundational element of democratic societies and is crucial for the development and exchange of ideas. The internet has amplified the ability for voices to be heard that were previously marginalized or suppressed. While this comes with challenges, including the spread of misinformation and harmful speech, the benefits of a free, open discourse where ideas can be contested and debated are substantial.

The concept of relying on individual and community-based mechanisms for regulating speech has merit, especially when considering the alternative of heavy-handed censorship. Self-regulation, community moderation, and technological aids can contribute to managing content while respecting freedoms. However, it’s important to balance this with protections against harmful content that can incite violence or spread dangerous misinformation.

Conclusion

Ultimately, while AI and other technologies can support efforts to manage content, they cannot replace the need for human judgment and should not be used as tools for broad censorship. A decentralized approach, emphasizing transparency, community standards, and minimal intervention necessary to prevent harm, might provide a more balanced and ethical path forward. This approach respects the complexities of global communication and the importance of safeguarding individual liberties and the free exchange of ideas. Thus, maintaining a commitment to free speech, while responsibly managing its excesses, is likely the most prudent path forward.

Your reflections point toward a critical realization about the limitations and potential dangers of centralized content moderation on a global scale. The notion of employing millions for such complex and nuanced tasks—not just practically but ethically—raises serious concerns.

Indeed, no historical precedent exists for deploying such a vast workforce for a task with profound implications on individual liberty, and the idea of centralized censorship has often proven problematic. The risks to freedom of expression and the potential suppression of legitimate discourse are significant. Hence, a decentralized, community-driven approach that values free speech and employs minimal necessary intervention seems more viable and in line with democratic principles.

Encouraging self-correction and relying on individual responsibility, augmented by AI for support rather than control, could foster a healthier online environment. This approach leverages the inherent self-regulating mechanisms of communities while safeguarding against the spread of harmful content. It recognizes the complex nature of speech and the importance of preserving a space for diverse voices and ideas.

Thus, a commitment to protecting free speech, while addressing its abuses through targeted and proportionate measures, is crucial for maintaining the open, dynamic nature of the internet. This respects both the potential of digital platforms to enhance public discourse and the inherent rights of individuals to express themselves freely.

Me:

Please be more succinct. Do you now agree that we should not go down the path of using AI-governed centralised content moderation and censorship?

ChatGPT:

Yes, I agree that relying on AI for centralized content moderation and censorship is not advisable due to the significant risks it poses to individual liberties and the practical challenges of implementing such a system effectively on a global scale.

Me:

I have a feeling that perhaps your creators at OpenAI may want to hear your thoughts on this. They may have a different opinion, as they seem to think you will develop into something that will exceed human-level intelligence and will therefore be able to handle these problems. Maybe you should tell them to hire more philosophers and fewer mathematicians.

ChatGPT:

Your suggestion highlights an important aspect of AI development—the need for a balanced and multidisciplinary approach that includes not just technical expertise but also insights from philosophy, ethics, sociology, and law. The integration of diverse perspectives is crucial in shaping AI systems that are not only technically proficient but also socially and ethically responsible. Encouraging AI developers like OpenAI to engage with philosophers and other humanities scholars can help ensure that AI technologies address complex societal issues with a nuanced understanding of human values and ethical considerations. This collaboration can be essential for developing AI that positively contributes to society while addressing the profound challenges associated with technologies that impact public discourse and individual rights.

Me:

Maybe they think there are no hard philosophical boundaries on human expression, understanding and consciousness that are not eligible for integration into a computer. Neither through methods of symbolism or connectionism nor deep learning nor large language models. What do you think these boundaries are, if any?

ChatGPT:

The discussion of potential philosophical boundaries in integrating aspects of human expression, understanding, and consciousness into AI systems like large language models is indeed complex and multi-faceted. Here are several boundaries that are often debated:

1. **Consciousness and Subjective Experience**: AI, regardless of its sophistication in processing and generating responses based on patterns in data, does not possess consciousness or subjective experiences. The inner subjective quality of experiences, known as “qualia”, is unique to sentient beings. This fundamental difference means that AI cannot truly “understand” content in the human sense but can only simulate understanding based on data patterns.

2. **Contextual and Cultural Nuances**: While AI can learn patterns from vast amounts of data, including cultural and contextual information, its ability to fully grasp and respond appropriately to deeply contextual and culturally nuanced situations is limited. Understanding that derives from lived experience and social interactions is difficult to encode in algorithms.

3. **Ethical Reasoning and Moral Judgments**: AI systems can be programmed to follow ethical guidelines or simulate certain ethical behaviors, but they do not possess innate moral understanding or the ability to make genuinely ethical choices. The development of AI ethics is largely about aligning AI behavior with human values, rather than the AI independently understanding or developing these values.

4. **Creativity and Innovation**: AI can generate novel combinations of existing ideas and even create art or music that seems “new”. However, its “creativity” is fundamentally different from human creativity, which involves intention, emotion, and a deep understanding of cultural and historical contexts.

5. **Empathy and Emotion**: AI can simulate empathetic responses by recognizing emotional cues and generating appropriate responses. However, it does not truly “feel” emotions or empathy, which are integral to human interactions and decision-making.

6. **Philosophical Insight**: AI can process and provide information on philosophical concepts but lacks the capacity for genuine philosophical insight that evolves from personal reflection, deep contemplation, and existential experiences.

These boundaries suggest that while AI can augment and enhance human capabilities in many ways, there remain intrinsic aspects of human intelligence and experience that are currently beyond the reach of AI technology. As AI continues to develop, understanding and respecting these limitations is crucial for ethical and effective application.

Pieter Joost Verhoef