We are at that time of year when it is customary to look back and try to understand and draw some conclusions from the past twelve months. This article is my humble contribution to that tradition, for what it's worth. Of course, the areas in which I move the most are computer science and teaching – although, as much as I can, I try to practice real teaching rather than mere instruction – and in these areas the main trend of 2023 has undoubtedly been the emergence of artificial intelligence.
As I have had the opportunity to comment here and elsewhere, it is worth keeping in mind that the AI technologies recently put into use are not really anything new or innovative. What is new is the scale at which they are being applied and the impact they are having socially, educationally, and no doubt economically as well.
The main idea I want to put forward is that this is just a phase, an evolution of existing technologies and not a real revolution. Machines that can interact with people using natural language are no longer new: personal assistants such as Siri, Alexa, Cortana and others could already do this. It could even be argued that, in one respect, they performed better than today's interfaces, since they responded to spoken rather than written commands. The difference has come with the integration between natural language models (the parts of the system that talk to humans) and the vast databases that underlie them. I expect, in this sense, that there will be more and more interconnection, meaning that voice assistants will improve searches, but we should also be increasingly aware that any of our online data will be available to more and more AI systems, people, programs and applications. We have reached a point where we have to learn to balance the convenience of using these technological means (a very real convenience that allows us to do many jobs more quickly and efficiently) against the risk of seeing our personal information scattered, as they say, to the (electronic) wind.
The 1960s were an extraordinarily productive decade, both in terms of technological progress and in terms of human reflection on its uses and potential dangers. A good example is the science fiction author Philip K. Dick and his famous novel Do Androids Dream of Electric Sheep? (1968), which inspired the movie Blade Runner. At the heart of this dystopian tale is the conflict between humans and androids in a post-apocalyptic world recovering from nuclear war. The central problem both groups face is distinguishing what is human from its technological counterpart. One of the novel's conclusions, fairly obvious, is that beyond a certain level of development it becomes very difficult to tell the two forms of intelligence apart. The author even suggests that, to some extent, doing so is neither useful nor necessary.
Returning to real life, there are notable differences between us and artificial intelligence. The first is that artificial intelligence is not as advanced in reality as it is in science fiction. Personally, I don't think it ever will be, considering that the goal of its creation is not to clone humans – we already have a very effective biological means of doing that – but to build specialized tools. The second point to keep in mind is that achieving high-quality AI requires enormous resources, and the servers and connections behind it demand significant financial investment.
So when I use AI, which I do from time to time, the two questions I always ask myself are: “How much does all this cost?” and “Who is paying for it?”
A very good start to 2024 for everyone.