In previous issues we took an in-depth look at self-driving cars and their technologies, and we also covered artificial intelligence. If these topics share a common denominator, it is surely data and data processing. As was mentioned in an earlier conversation, “data is the oil of today.” The statement is no exaggeration, and the topics above are only the tip of the iceberg; it is enough to think of the GDPR, which affects everyone. To learn how this somewhat elusive concept is becoming an ever greater part of our everyday lives, how seriously we take it, and how well we can look after it, we interviewed Arthur Keleti, cybersecurity expert and futurist.
Arthur Keleti is an IT security strategist, a founding member of the Voluntary Cyber Defense Collaboration, and a cyber-secret futurist. He is undoubtedly passionate about his work, as evidenced by the fact that our one-hour talk evolved into a spontaneous three-hour mini-seminar once we got into a heated discussion of the topic.
Arthur arrives impeccably dressed, and seeing his trendy checked suit I immediately get a feeling of déjà vu. If a pipe had peeked out of his jacket pocket, I would have dubbed him a digital Sherlock Holmes on the spot. I hope he won’t be offended by the comparison, and that it won’t make you smile either, not least because the conversation that follows proceeds with logical reasoning that could easily earn my partner this new nickname. I am also quite sure that many of us have already encountered one of the topics we discuss below, even if the facts have remained unspoken. So, like a good Watson, I will now contribute to the topic with my questions.
Andras Barna (B.A.): I understand how one becomes a cybersecurity expert, but how does someone become a futurist?
Arthur Keleti (K.A.): I would start a little further back. I think that among those who deal with security, curiosity is usually the driving force; a person wants to know how something works and how it can be taken apart. And those who are also interested in defense will want to make things more secure.
For me, everything started when I was a kid: at the age of nine I got my first computer, a floppy-drive Commodore 64, which was a big deal at the time. From then on I began to wonder how things worked. The gift also brought another aspect into my life, as a small group of friends formed around the computer. We focused on how to hack games (laughs). At the time, that could be done without any consequences. The circle widened later; I also made music, and my friends and I wrote a lot of demo programs exploring how the Commodore 64 works. In short, you could say I grew up with the system.
As an added benefit, by my teenage years I had already learned things I still profit from today: how to work in a team, how to meet deadlines, and that if I don’t finish something on time, my work affects the work of others, and so on.
B. A.: I see your point of departure now, but how did you move toward the security field?
K. A.: I went to school in Canada, where I studied IT as well. But I have to say that among Hungarian IT security experts today you can find professionals with backgrounds ranging from chemistry to geophysics. This segment of the IT industry is unusual in its sense of obsession and love of the craft.
In response to your question: in the mid-1990s I was hired by a pager company called EasyCall. At the time, this seemed like an opportunity for the future. I was responsible for the internet service environment, and it quickly became apparent that the internet could be a very dangerous place once it was wired into an organization.
Just to be clear, this system worked by receiving messages over the internet, which we then sent on to the personal pagers. Anything – from the content of the messages to their volume – could be a risk. Not to mention the malicious content that was already present at the time, even though we are talking about the dawn of the internet. Later I moved on to the systems integrator ICON, then to the KFKI group, and finally to T-Systems. The common thread is that at each company I worked in the security field, dealing with companies, business ventures, and state organizations.
B. A.: How did futurism integrate into this?
K. A.: I was quite often confronted with the question of what direction security systems and the various attacks would take. The question therefore arises: will these security systems be able to protect us? You could say this was a taboo that people did not like to talk about.
Former NSA Director Michael Hayden described the cybersecurity situation best. He said our society works so that when somebody feels ill, we know to call an ambulance, and if the house catches fire, we call the firefighters. But who do you call when your computer is hacked in cyberspace, and who do you call when your data gets stolen? Where, and in what form, does public protection appear? On this subject, however striking, the state can only provide assistance in certain areas and under certain conditions. This finding is also consistent with analyses predicting that in the coming years we will not be able to protect two-thirds of our sensitive data. This tendency is clearly visible in the trends as well.
For years, two hundred days has been the average time hackers spend wandering around certain networks without being noticed. It then takes system operators about seventy days to do something about the illegal access. The biggest problem is that so far we don’t seem to be able to change the order of magnitude of these averages.
This is a fight in which, despite our best efforts, it looks like we cannot catch up; moreover, there are technologies – artificial intelligence, quantum computers – that increase our exposure when they fall into the wrong hands.
Because of this, I began to wonder whether we would be able to protect the interconnected IT systems of the future. That is how I started to dive into the future of this topic. Take a baseline case that poses the question of whether our digital assistants will one day reveal our secrets, because, let’s be honest, they already know a lot about us. Just think of something as small as a note in the calendar saved on your phone, which today is known to appear on numerous platforms.
B. A.: To what extent does this line of work make a person paranoid?
K. A.: Security experts suffer from paranoia because they know that any system can be hacked; there is no 100% protection. Personally, I am also sure there is always someone interested in my secrets, whatever kind they are. There are as many motivations for obtaining data in cyberspace as you can imagine, from state-funded hackers to white-collar criminals who have moved into cyberspace and make millions of dollars by blackmailing people or stealing their data.
Returning to the basic question: we security experts are generally convinced that we are being watched, even now, as we speak. Even so, I usually say – and this could be my motto – that you shouldn’t be scared of everything, but you should be structured even when you are scared.
A very simplistic but good example of this is the movie Aliens. The main character gets into seemingly hopeless situations, but she does not worry about everything; she worries in a structured way: she knows what she is afraid of, tries to focus on that one thing, and ignores the rest.
What we actually work with is structured fear, and that is nothing other than risk analysis. Everyone does this as a private individual, not only with their data but throughout their everyday activities. If people go to a concert, they assess where to put their valuables; I could just as well have said that if they went to the beach, they would think the same way.
The same happens with the locks on my front door: I think about which kind I want and how many. I could mention countless things like that.
The turning point in all of this is that we do not do the same to protect our data. I discovered during my research that data is an abstraction to which our immune system gives a different response. For example, if I tell you that someone is waiting for your child at school who is neither you nor a member of your family, you will feel an adrenaline rush, jump up, and run off. In contrast, if I tell you that your sensitive information is being stolen or your email account is being hacked, you will feel nothing. This is simply because we have no evolutionary responses to it; we have not developed a digital immune system.
B. A.: Perhaps this digital immune system has not developed because the ordinary person, apart from their PIN code, has nothing relevant to protect. If I’m being very demagogic, I might say: who cares whether they know I’m in Costa Coffee right now or somewhere else?
K. A.: What you just said is nothing more than a risk analysis of your own situation. But, teasing aside, I can disprove the example you mentioned in general terms. You don’t care who knows where you are; but if, let’s say, you’re cheating on your wife, you will probably wonder whether your wife knows where you are in the afternoon, or whether she can see your phone’s physical location, and so on. This assumption can be translated to any situation. If somebody else knows where your child is, that’s interesting. The examples clearly show that this is not only content-dependent but context-dependent, too: some things are not interesting on Monday but may be on Wednesday. There are many cases to mention – a disease, for example, may not interest your neighbor, but it does interest your employer or your family. Let me give you another scenario: if your sexual orientation is different, do you want everyone to know? I also do research on secrets, which is a very difficult area to explore. Without going into the details now, it can be stated as a thesis that it does not matter that something doesn’t interest you if it interests everyone else. What counts as sensitive information is unclear, and I have to add that it is not clear in people’s heads either. When we talk face to face, for example, our biological drives influence what data we give away. That is not the case on the internet or in cyberspace, where we have no control over our data.
B. A.: There are so many pros and cons that this is becoming more uncertain than reassuring. If all is as you say, is there a solution to this situation? Or will it take a generation or two to grow up and develop a digital immune system?
K. A.: The generation now growing up is already living a life in which data is being generated about them. In fact, even before they were born, their 3D ultrasound image was published on the internet: they are there, unknowingly, without having any say in it.
Some studies suggest that the solution to this problem may be a kind of transparency: not locking everything away, but accepting that a lot of our data is publicly available. The generation now growing up lives a more public life, because they are aware that many people know where they are going, perhaps because they checked in somewhere. So this change in attitude has already begun, but social scientists have yet to say how much we will be able to get used to this absurd, unnatural idea. But somehow it is going to work, for sure.
I think people’s natural tendency to adapt will help them process this new situation, but without machines it won’t work. So I can give a somewhat transhumanist answer to the question of how to build digital responsiveness when I predict that increasing collaboration with machines will provide the solution. This means that many decisions will be made with the help of a machine, and the same machine will protect you from inadvertently disclosing sensitive data.
B.A.: How will it be possible to base this on new legal foundations?
K.A.: As far as the law is concerned, we are facing giant challenges: the rigid system is unable to keep track of the problems due to their complexity and extraordinary variability. This is easily captured in the fact that up until a few years ago we attempted to describe hacking with words otherwise associated with ‘cheating’ or ‘scam’. Even though the situation has somewhat improved lately, the legislation still regards hacking as a break-in into a system, whereas, for example, for me to send you a blackmail letter no intrusion is required whatsoever.
I have already begun a hypothetical dialogue about this with a couple of progressive lawyers. The discourse does not rule out the possibility of altering our standards. It is likely that in the future – however strange this may sound – some of our laws will be written and enforced by computers. A substantive set of standards needs to be put in place to address issues related to digital systems and cyberspace. It is already apparent, however, that we will have to do this differently from the traditional method, in which parliaments try every five years to make decisions based on the situation at hand.
Today, business organizations – the likes of Facebook, Google, Apple, Alibaba, etc. – make up for the missing sets of standards and regulations. Simply because for them it is critical that your data remains protected. But if something goes wrong, for example, if people see too much or too little, then it is Mark Zuckerberg that gets summoned by the U.S. Senate.
Today, the bulk of the control is exercised by businesses – because there is no one better, more innovative, or more flexible at doing so. There is, for instance, the question of cryptocurrencies, a huge risk factor for our current monetary systems. Could there be a situation where it is not up to a bank to tell whether I have money, don’t have money, or where my money is?
Let’s not dive deep into this; here’s a brief example. Next year, Facebook plans to introduce Libra, its own cryptocurrency. The Senate was swift to summon Mark Zuckerberg and question him about the plan. Zuckerberg said that although he would be more than happy to use the current financial systems, they are simply not developed enough. Even when the situation requires it, you cannot send money at the press of a button. Let’s face it: it’s not a big ask, yet it’s impossible in today’s banking systems – especially in the United States.
B.A.: Obviously, it’s not in their interest.
K.A.: Sure. But it’s in my interest as a user. From this point, it’s up to the market and – if I may add – there are already several such systems in place, including one in China.
B.A.: How do we stand with cybersecurity on a domestic level? Do we pay enough attention to it?
K.A.: The level varies. The government has recently started dealing with cybersecurity more intensively. In my opinion, while the governments of smaller countries may be five years behind, the dangers are the same: this is a global network without physical barriers. Just like in any other country, the financial sector is the most developed area. Money is a point of reference: everybody knows that as long as they have some, they enjoy some sort of security. This focus on money is visible at the state level, among regulators. The segment sits well ahead of other industries, whose actors are by far not as well off. In many industries, including car manufacturing and pharmaceuticals – unlike banking – production does not happen inside IT systems. Even though these systems also run on an IT basis, they don’t store data; the manufacturing takes physical shape, which makes IT security seem less crucial. These companies often believe that manufacturing robots don’t have to be as protected as data on a bank’s server. But, in fact, they should be. Recent experience shows that hackers have already taken shots at such factories – and not only at them but also at their suppliers, which are full of industrial secrets, economic information, and even personal data.
We are far behind when it comes to the security of families and children. Some initiatives have already begun, but we are still very far from a desirable level. By now we should all be treating it as an evident, natural topic – yet we are not. Sadly, people only start caring about cybersecurity once lightning has already struck.
B.A.: Probably because – in their thinking – there is no problem until there is a problem.
K.A.: Well, yes – but then it’s a big problem. The only reason I can’t understand this train of thought is that there are things we do regardless. For instance, if I can see through the window that it’s snowing outside, I make sure to put on a coat and a hat, or else I will catch a cold. I don’t understand why people skip this logical step when building IT systems. Building the security infrastructure, something that is often left out of the plans, costs only 2–5 percent of the whole investment, while retrofitting it later can cost ten times as much, not counting the collateral problems.
B.A.: Let’s switch topics for a second! I make no secret of my special interest in self-driving cars. There, almost everything is interconnected and communication is extensive. To what extent can we secure these systems?
K.A.: I would not limit the answer to cars, because besides cars there is an array of things that generate vast amounts of data. Today’s airplanes, for example, contain up to fifty thousand sensors and produce several terabytes of data per flight. The problem is not creating the data; it is processing and protecting it.
At a 2016 conference I took part in a cybersecurity drill that revolved around a crisis in which cars suddenly stopped working. Or rather, they had not stopped yet, but their screens displayed a message attributing the attack to a group of hackers. As it turned out, the hackers had infiltrated the car manufacturer’s IT system and planted ransomware in the cars’ software updates. In the imaginary scenario, the hackers did not arm the malware; they published their demands and insisted the government meet them, or else they would shut down everything running the software. The storytelling, of course, made sure to include police cars, ambulances, and fire trucks. This is a very brief summary of the drill; as the story progressed, it also escalated. What is most exciting is that this took place in 2016 at a conference of EU security leaders to which several car manufacturers were also invited. At the end of the simulation, they were asked whether this could happen in reality. At first, of course, they said it would be impossible. After analyzing the case, however, it no longer seemed that absurd. That was four years ago: just imagine the progress made since then.
With every system we automate and integrate into our environment – installing it in more and more robots, cars, devices, and surgical instruments – our exposure grows. The more interconnected they are, the greater the risk. The only way to combat that risk is through methods and software that, more or less automatically, detect, repel, and report incoming attacks.
B.A.: As the saying goes, only a fine line separates the good cop from the good thief – though it is a very important line. The same could be true of hackers and cybersecurity experts. The question is: to what extent should cybersecurity systems be concentrated in a single hand? Could it pose a threat if, let’s say, somebody joins the dark side?
K.A.: That is a good and pointed question. There are many ethical hackers who engaged in unethical activity in the past. The fine line you mentioned is a very important one, as it assumes a moral switch. The key question is where one stops. Security professionals always say that they stop after demonstrating the vulnerabilities but before causing any damage.
Although there is no “mastermind” – someone who keeps track of everything – when designing a system we cannot exclude the possibility that one person knows its faults. If this person is unstable and open to corruption, it can happen that he or she reveals those faults. There are, however, built-in control mechanisms that protect access points, usernames, and passwords to avoid single-person access, the system’s Achilles heel.
B.A.: I don’t want to be stroppy, but if they know there is a fault in the system, why don’t they fix it? That would be the obvious solution, wouldn’t it?
K.A.: You are not being stroppy; this is a perfect question. A system can have many types of issues. There are faults we don’t know about: they get built in, and we only discover them later. And there are errors you do know about: you discover them and decide to leave them in place. The main reason for both is the business environment. All systems – and this is not exclusive to security – have faults that are left unfixed for economic, strategic, and other reasons. On the one hand, there are technical reasons: today’s complex structures can comprise millions of lines of code, making it impossible to uncover all the errors. Even if you could find and list all of them, you would end up fixing only the first five or ten – because that is what you have enough money and time for. In other situations, decisions must be made in proportion to the level of risk. In the end, one error or another will be left uncorrected.
B.A.: Let’s bring the conversation full circle and end on a more personal note. What is your personal cybersecurity like? Are you worried about your own situation?
K.A.: This is a very difficult question, because my situation is special. It is special because there are multiple security approaches one can follow. Most security experts would refrain from being on stage; they would not share information about themselves. Staying silent is an effective method of protection. But I am a sort of representative, a spokesman, of my profession, so I cannot refuse to tell you in an interview like this where I came from, what I did, and what my opinion is – that would end the conversation. This makes my situation a bit more difficult: I need to watch what I say while actually saying many things. On top of that, I have combined this with futurism, where I have to try a range of things that are not yet safe. It is not possible to talk about the future of something without knowing what it is like and how it works. So I don’t feel safe. For a good number of years now, I have been sending all of my emails with the certainty that someone other than the recipient might read them.
I try to lead a security- and risk-aware life, but I have to confess: I see little chance of doing it efficiently – a thought that keeps me up at night.
The original of this article and the photos are from We Are Men magazine (2019/4).