- Apple has announced Proactive AI as a new Siri feature that gives the assistant greater autonomy
- The danger lies in how this technology is developed and what it will be used for in the future
One of the great limitations of Artificial Intelligence, and one of the arguments most often repeated to reassure society that AI is not going to end our jobs, among other legitimate concerns, is that this technology requires human commands to do what it is capable of doing.
For example, AI cannot replicate the Mona Lisa unless there is a person behind it writing a prompt that tells it what to do. In other words, until now we have always assumed that AI was under our orders.
However, the rapid development of this technology has surprised even the foremost experts in the sector. We have already seen the so-called godfather of AI reiterate his concern about the path the technology is taking, while OpenAI, the leading company in the sector, goes through a major internal crisis over its security and control measures.
The problem is that AI is getting smarter and more capable. While this is good news for technological development in general, we also have to be careful, because we are gradually approaching a level at which machines will be extremely intelligent, and who knows whether they might one day gain a form of consciousness.
The latest example is an announcement Apple has made as a preview of the many things it plans to present in June at the Worldwide Developers Conference (WWDC) 2024: a new capability, called Proactive AI, that will arrive with iOS 18 in Siri, the virtual assistant on the company’s devices.
What is Apple’s Proactive AI and what is it for?
As Apple explains, in iOS 18 Siri will receive an update that improves its capabilities through Proactive AI. The assistant will adopt a more conversational tone and incorporate functions that allow it to be more autonomous and depend less on the user’s commands.
Among the most notable additions, Siri will be able to automatically summarize notifications and articles, transcribe voice notes, and improve its current ability to fill in the calendar or suggest specific applications. In this way, the AI will be able to anticipate needs and suggest actions that make users’ lives easier.
Why should we care?
While it is true that, at the user level, these new capabilities sound great, will make your life easier, and seem harmless, it is important to look beyond that. The announcement of Proactive AI means the technology is gaining ever more reasoning capability and control over itself, and as long as it is limited to simple, well-intentioned actions, that is all good news.
However, what if one day Proactive AI becomes so smart that it suddenly decides it no longer wants to serve users and starts doing whatever it wants? Or, to give another example, what if a hacker intervenes and turns that proactivity to malicious ends, confusing the AI so that it is the one that hands over all the passwords the user has saved on their device?
There are many possible scenarios. Although the arrival of Proactive AI is good news in principle, because it will make our lives easier, there are many security issues that, in the absence of controls and regulation, should put us on alert about this overwhelming pace of AI development.