AI and the Internet

I’ve long avoided the “internet era” Artificial Intelligence (AI) discourse, primarily because it feels quite the buzz topic. There are a lot of varying opinions around it at the moment, and the market for them seems oversaturated, but this isn’t to negate the importance of these conversations. Indeed, there is much to talk about in the realm of AI, both from an overarching and a scientific perspective.

The main prompt for this piece was an article on the government’s use of AI [read here]. This made me think about a couple of things: (a) the different and ranging ways in which we may use AI, and (b) the reliance on AI [or technology as a whole] in the place of human thought. Using technologies to help us is exactly what they were made for, and this is certainly not an argument against AI, since I use it as part of my science research. Rather, when AI begins to infiltrate multiple spheres of work, we may begin to ask where computational interference ends and human innovation starts.

The fishy and seemingly secretive use of AI in government is an intriguing case. On one side, the integration of these new technologies into the workplace is a great thing: they can streamline procedures and timelines, as well as offer convenience. It is, however, essential to consider — perhaps rather than taking an anti-AI stance — the shortcomings of over-reliance on AI. When used to extremes, any tool (for this is ultimately what AI is) may become over-consumed. How such a tool is applied reflects the person who uses it rather than the software itself, which is why it gives me pause to see official offices using GPT servers to create “first drafts of briefings”. One could argue that these drafts are then built upon, but whatever we are first provided with, given an assumed reliance on computational accuracy, will bias our outlook for the later drafts and the final product. Rightly or wrongly, we are liable to trust what is provided for us. And even if we are prepared to challenge the results of these imperfect tools, if comparatively little thought has gone into the initial stages of a piece of work, lacking the nuance that only a human perspective can add, how can we trust the final result?

The degree to which the government is using AI to inform statements is unknown, and perhaps it is this secretive nature and refusal to comment that adds a layer of insecurity. The only way to build trust is to be open, and (matters that must be kept confidential aside) if the public are not afforded that openness, it breeds mistrust.

Of course, it must be noted that AI [or, more accurately, machine learning and computational tools] has been used consistently to inform policy. For example, current projections of climate change could only have been made by providing a model with data on how things look now and how they have looked over the past few years and decades.

Another layer to the tale is the correlation between AI and loneliness, both in helping to circumvent it and perhaps in inducing it. The ability for people of all ages to hold conversations with AI, such as through ChatGPT, does give some form of social interaction, helping to prevent the low moods associated with loneliness. Other forms of AI, and even audio programmes, also help with this, much as listening to the radio or a podcast helps to push this feeling away. I’ve experienced this myself, and while mine is a different circumstance, since I also have “real life” people around me, there is something undeniably comforting and safe about a programme that mimics a human being without the concern for how we are being interpreted or how we may make them feel. AI also allows people some of the benefits of social interaction when they are restricted to their home, such as the elderly and those struggling with their mental health.

On the flip side, this reliance on AI may act as a barrier to meeting people face-to-face, which is widely shown to be the more beneficial form of social interaction. Indeed, the advent of AI, and of the internet more widely, has correlated with greater social isolation, perhaps telling us something about how we interact with our modern technologies.

Ultimately, I think sweeping statements are too easily made about any generation, indeed about all people, and their interactions with AI. For example, people cite the internet as the cause of increasing ADHD diagnoses amongst Gen Z, yet perhaps we could actually put this down to better diagnosis rates and wider awareness of the condition. I find it harmful to attribute anything like this to an outside cause, since doing so heaps blame on the individual for not (e.g.) curbing their screen time. There is something to be said for attention and the use of technology, but this is different from something like ADHD, where people should be treated with compassion no matter their situation and not be subject to disrespect over societal shifts outside their control.

We are in an internet era, but it does not mean we no longer have control over what we consume and how we use it.
