March 21, 2023

Sergey, let’s start with the topic of personnel, one of the most painful of 2022. At the height of summer it seemed that IT specialists were leaving in such numbers that no one would be left. Then, in September, it became clear that now they really were leaving. By now some of those who fled have returned. Is the crisis of confidence over? On the other hand, judging by the ideas being voiced about passport stamps and employment restrictions, something like state resentment remains.

Sergey Plugotarenko: I am absolutely convinced that those who have left should be kept within our ecosystem by every possible means. Under no circumstances should they be branded as traitors, and in no way should they be discriminated against. Those who left face both the impossibility of buying a cheap return ticket and rising prices, but this is their choice: they believe they are safer there. And if the state starts inventing obstacles of some kind, or a stamp in the passport, I think that is a very bad trend. Even discussing it is bad, let alone doing it, because in this way we would show that the state respects IT specialists only in words, while in reality it is ready to settle scores with those who looked elsewhere. IT people create products with their heads, and for the head to work, certain conditions are needed. It is impossible to convince them: "Comfortable conditions, a warm climate, a safe environment, high income, we will take all of this away from you, but you will still produce good products." That simply will not happen.

Is the attitude of employers towards them changing?

Sergey Plugotarenko: I often hear: "They will come back toxic anyway!" That means some of them do want to come back; a summer survey showed they were already more than 50%, though that was before the partial mobilization. Others project their fears onto colleagues who remain in Russia, hence all this talk about fleeing. And a third group has left but now scares those who want to return home with horror stories about Russia. This is a real problem. But people are people; they worry. That does not mean we should write them off, let alone cut them off. If their mood is bad, we need to lift that mood and try to make them want to come back. I think that is the main task.

Maksut Shadayev said that security concerns are the main issue, and that until this problem is resolved, there is nothing to expect. What do you think the industry can do about it?

Sergey Plugotarenko: I think the industry has already done everything it could. It turned, faster than others, to its profile minister, who in February-March took charge of the industry and began to defend it along the lines we proposed. After all, this did not come out of nowhere; it was the result of the work of the operational headquarters of the Ministry of Digital Development, where Shadayev held meetings. He then personally took on the role of communicator, both with companies and with users. He was present on all the platforms where they talk to each other, and he showed that he was ready to listen to them.

Let’s turn to the industry itself. 2022 showed us that in the field of software we are doing quite well; sometimes there is even a surplus. In particular, not long ago three operating systems were singled out, from almost two dozen, for market participants to focus on. At the same time, there is a sense that no clear leaders remain on the market, which makes it difficult for consumers, including corporate and government clients, to make a choice.

Sergey Plugotarenko: In my opinion, this is an ideal situation for business development, one that any country, or the European Union, could only dream of. Another matter is that users, businesses, and the state can indeed become disoriented at times: what to use, what is better, what is worse. But no one has yet dissuaded me from the view that the market economy still works well, at least in the b2c segment. The product that proves best for users should eventually attract those users and become richer and more successful. The rest will simply have to close or be bought by someone else.

In the state and b2b segments, of course, everything is different. There must be clear criteria for why a particular product is chosen. Here, independent consultants who produce analysis and market assessments are very important. In fact, becoming such a consultant is one of the new tasks of ANO Tsifrovaya ekonomika. We must learn to recommend particular solutions or technologies, software and methodologies, so that particular categories of government bodies, companies, and perhaps even users are well served.

A more complex topic is artificial intelligence. It is one of the most significant state projects, but one gets the feeling that the more it is promoted, the more frightening the topic seems.

Sergey Plugotarenko: Artificial intelligence is simply something new, and there are many horror stories around it, as with any innovation that works with data, plus specific ones, such as someone taking your job. It is a difficult story, but it has to be worked on: seek out these horror stories, do not gloss over them but take each case apart, and draw conclusions, legislative and regulatory. It is very good that there is an AI Code of Ethics and that companies are joining it. It is very good that there are evangelists, at the highest level, ready to tell the president what tasks AI will help solve.

Another issue is biometric data. Here the fears are even greater, and all of this pushes the state toward stricter rules of the game, both for biometrics and for data in general. We see the regulation of biometrics, in particular the relevant law the President signed just before the New Year, and initiatives on recommender services.

Sergey Plugotarenko: The issue of trust and transparency plays a key role here. It is the most important thing, because everything is in people’s minds. The degree to which users trust a business, see the benefit, and understand that it is transparent determines how engaged they will be. Good intentions can be proven only through greater transparency and evidence of safety. As for the Unified Biometric System, that is, of course, a complex question. On the one hand, users do not really trust biometric technologies; on the other, companies have so far done little to earn that trust. But if we want to develop biometric technologies, we cannot do without interaction between companies and the state, and without taking actual business processes into account.

However, there is another side. Users are sometimes less paranoid about their own security when they see some kind of benefit. I am not sure that is good, but it works. On social networks we post information for a kind of social reward, for likes. We give our biometrics to Apple or Samsung because it is more convenient than entering a password, and so on. It has to be shown that the benefits will far outweigh the risks, and that the risks are handled by a prudent state that protects your personal data and by, on the whole, socially responsible domestic business. This is serious work, but without it nothing will come of it.

And in the case of recommendation services, and fears that they could be used to manipulate people, what should be done?

Sergey Plugotarenko: It is important to understand what we mean by "transparency" of recommendations. Goals can be made transparent; data usage can be made transparent. The workings of artificial intelligence systems are complex, and for the user it is more important to have a general understanding of why this or that recommendation is displayed and singled out from the rest. It makes more sense to decide what we want to avoid in the operation of recommendation services and, from that, derive general operating principles that digital services must follow. Companies can handle this task themselves as part of developing self-regulation. But for that they need to reach mutual understanding with the state and jointly answer the question: what negative effects do we want to protect the user from? When developing any proposal, it is always necessary to seek balance and take economic efficiency into account; otherwise we could ourselves halt the development of digital technologies in our country. After all, it is Russian mass-market user services that are the main engine of development for Russian-language semantic technologies. And what is being proposed now, the ability to switch off recommender systems, will lead to a deterioration of their work, and we will not achieve the objectives set out in the decree and the federal project "Artificial Intelligence".

Still, are artificial restrictions needed for working with data, for example, so that everything is not collected, as they say, "in one basket"? And are there areas where you would not allow AI at all?

Sergey Plugotarenko: Our digital twin is already perfectly assembled by any mobile operating system that is not ours and that we carry in our pockets. So it is probably somewhat redundant to claim that things become more dangerous when someone else does the same, but under state supervision, under regulation, and under threat of the wildest fines for violating your privacy. It seems to me this is another of the horror stories. I carry Android and iOS devices in my pocket. I have two watches. Any fragment of my voice, my geotags, what I type on the screen, all of this automatically goes somewhere, to say nothing of transactions, mail, and correspondence. So, probably, yes: the collection of data in one place, its aggregation from different sources, carries a certain degree of danger. But, on the other hand, as far as users are concerned, this is already happening (with business, things are probably different), and it is not protected or guaranteed in any way. Therefore there must be reasonable legal restrictions: the sense that you will be fined if you do something wrong, raising the digital literacy of users themselves, fighting corporate leaks, and so on. Nothing new.

As for where not to allow AI and other technologies, I would answer that such areas exist, but I know the technologies have already entered them. For example, the analysis of all my correspondence, of my entire photo archive, of my purchases. At some point I gave in: I allowed my photos to be collected, uploaded to the cloud, analyzed, and classified by faces and geotags. Yes, I understand that this is, of course, a big loophole. But it is still very convenient. Apparently, the boundary lies where the use of AI will genuinely affect people’s lives and health: in such areas, of course, human oversight of the algorithms is needed.
