Bytes of Ingenuity is a new podcast from Siren Associates and Siren Analytics where we take a look at innovative approaches to solving today’s toughest challenges. Whether diving deep on a particular issue – from social protection to AI election interference – or exploring how disruptive technologies are changing the world as we know it, we’ll bring you a range of critical perspectives on the latest trends in governance, security, information integrity, justice and rights. If you’re a tech enthusiast, entrepreneur, development professional or interested citizen, subscribe to Bytes of Ingenuity for insights and expert analysis to stay ahead in today’s evolving digital world. You can follow the podcast on the platforms below, or check the Resources page of this website to find all episodes.
Exploring legal AI’s potential
In this episode of Bytes of Ingenuity, host Nick Newsom explores the transformative power of legal AI and Large Language Models (LLMs) with computer science professor and AI expert Amer Mouawad. As LLMs capture the public’s imagination, this discussion delves into how they can revolutionize legal research and elevate legal practice. Amer reveals how LLMs can provide relevant suggestions and efficient assistance to legal professionals, saving them valuable time. The episode also highlights exciting legal AI tools in development, including those that simplify legal jargon for citizens and fact-check public statements for legality. Addressing concerns about data confidentiality, Amer emphasizes the importance of hosting LLMs offline to keep sensitive client data secure. Additionally, he critiques the politicization of AI development, advocating for open-source models and collaboration between diverse experts to create tools that serve the public good.
Episode overview
- 01:20 – What are LLMs and how do they work?
- 02:15 – Grounding LLMs in legal context
- 05:33 – Legal AI and the future of legal practice
- 10:56 – Confidentiality and data protection
- 13:59 – The impact on labor and education
- 17:22 – Democratizing legal expertise
- 19:04 – Legal fact-checking and misinformation
- 20:34 – Digital literacy, AI safety and open-source models
- 25:00 – The role of the state and data availability
- 29:04 – The state of AI in the Middle East
- 33:19 – Blending theory and practice
Further reading
Transcript
Nick Newsom (NN)
Hello, Nick here, and welcome to Bytes of Ingenuity, a podcast where we take a look at innovative approaches to solving some of today’s toughest challenges.
By now, most listeners will probably have used a generative artificial intelligence tool. Perhaps ChatGPT has become indispensable in your work. Or maybe you’ve experimented with a tool like it, but found that it didn’t really add a great deal of value yet.
The latter is possibly the case for people like lawyers, who in their daily work draw upon a highly specialized knowledge base that current AI tools have not been trained on or don’t have access to. In this episode, we’re looking at how that’s going to change with advances in large language models, the technology behind these kinds of tools.
I’m joined by Amer Mouawad, professor of computer science at the American University of Beirut and head of AI at Siren Analytics, a full-service digital transformation agency. We’ll cover how LLMs are being improved, the dilemmas around increasing their ability to reason, how they’re going to change labor and education, and the need for collaboration to trump competition and geopolitical interests when deciding how to develop and deploy AI.
So I’m going to kick off with a super easy question for you. But for the uninitiated, what is a large language model?
Amer Mouawad (AM)
Okay, so imagine I asked you to complete the following sentence: “the apple fell off…” You would immediately get “the tree”. But how would you make a machine figure out what the next word would be?
So one obvious way to do that is to actually go over the whole internet, look for every place where the beginning of that sentence appears, and check what the next word is that comes after it. And most likely, you’re going to figure out that the word that appears after “the apple fell off the” is going to be “tree”. And basically, if you add just a few fancy derivatives and math formulas on top of that, this is what LLMs are actually doing. They are figuring out how to complete a sentence – how to find the next word – by looking at the distribution of words over the internet. It’s as simple as that.
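To make that intuition concrete, here is a toy sketch of next-word prediction by counting word frequencies over a corpus. This captures only the frequency idea Amer describes – real LLMs learn these distributions with neural networks rather than by literal lookup, and the tiny corpus below is purely illustrative.

```python
# Toy illustration: predict the next word by counting which word most
# often follows a given prefix in a (tiny, made-up) corpus.
from collections import Counter

corpus = [
    "the apple fell off the tree",
    "the apple fell off the table",
    "the apple fell off the tree again",
]

def most_likely_next_word(prefix: str) -> str | None:
    """Return the word that most often follows `prefix` in the corpus."""
    target = prefix.split()
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - len(target)):
            if words[i:i + len(target)] == target:
                counts[words[i + len(target)]] += 1
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next_word("fell off the"))  # -> "tree"
```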
NN
So I know that you’ve recently been working on a number of projects using large language models as assistants for legal professionals, and the idea is that LLMs can be grounded on particular legal texts or statutes to help suggest relevant articles or precedents when putting a case together or when, you know, ruling on one. What do you see as being the main benefits of applying AI to this field in this way?
AM
All right. Before I answer your question, let me take it one step back. The way I see it, in every field where text is one of the main driving forces, like the legal field, there’s a revolution coming, because everything that is text-based, LLMs can now do almost as well as humans. Right?
Now of course, it doesn’t mean that everything is going to be automated. The human-in-the-middle aspect still needs to be guaranteed. But I think all those industries that deal mostly with text need to be careful. They need to basically watch out. They need to follow the trends, because there’s definitely a lot of major change coming to those areas. And legal is just one of them, because in legal, everything is text-based, right? The laws are written in text. The legal cases, the court cases – everything is in text.
And as you correctly mentioned, using LLMs out of the box for any industry just won’t work, because LLMs have a lot of problems associated with them. One of the biggest is that they hallucinate: they basically spit out continuations of sentences that have nothing to do with what you’re actually trying to do. And it’s very important, especially in sensitive fields like the legal field, to make sure that we do not allow such occurrences.
The way we do that with LLMs is called grounding. So instead of allowing the LLM to generate words almost completely at random, we basically force it to choose words from within a very specific context. This is what we call grounding. So we will force the LLM to generate answers based, for example, on legal documents. This way we know for sure that it’s not just retrieving a random answer from the internet. Combine that with the fact that LLMs understand language – they know what things make sense together – and add on top of that the grounding in a specific context, and you know that you’re going to get human-like generated text. It’s sometimes even better than what a human can write. And that is truly the power of LLMs.
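As a rough sketch of how grounding (often implemented as retrieval-augmented generation) might look in practice: retrieve the passages most relevant to the question, then constrain the prompt to them. The keyword-overlap retrieval and the three-article corpus below are illustrative stand-ins – a real system would use embeddings, a vector store and an actual model call rather than just printing the prompt.

```python
# Sketch of grounding: answer only from retrieved legal passages.
LEGAL_CORPUS = [
    "Article 12: Tenants must receive 30 days' written notice before eviction.",
    "Article 45: Contracts signed under duress are voidable.",
    "Article 78: Employers must pay overtime at 1.5x the hourly rate.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank corpus passages by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        LEGAL_CORPUS,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Constrain the model to the retrieved passages to curb hallucination."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the excerpts below; if they don't contain the "
        f"answer, say so.\n\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("How much notice must a tenant receive before eviction?"))
```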
And when I say language, really, it’s not only the English language – it’s any language that has a lot of data out there on the internet. But you can also go beyond spoken languages to programming languages like Java, C++ and JavaScript. All of these are languages, and the LLM can do great things with those. So everybody working with them should basically be evolving along with the technology, because it’s going to change how we do a lot of things.
NN
So as a lawyer, then you would expect, like if you put in a question about a particular case you’re working on, it might be able to kind of summarize the relevant legal arguments or past cases. Is that right?
AM
Absolutely. So, let me give you a little bit of history about why LLMs came to be. There were two major problems that scientists were struggling to solve in natural language processing, which is the sub-area of AI that deals with language. Those two tasks were translation and summarization. These were the two remaining tasks that resisted almost every attempt we threw at them to get good results. And the researchers tackling those tasks figured out that the big problem with the previous attempts was a lack of context. When you’re trying to translate a sentence, you cannot just take it out of context, translate it, and then put it back in. It just doesn’t work that way – you’re going to get poor translation quality.
This is where transformers came to be, which is the architecture underlying LLMs. And they were basically designed to solve these two things. So when the research began, it was literally just trying to solve summarization and translation. And the fact that LLMs had such huge capabilities beyond that came as a surprise to everybody. Nobody knew they were going to be able to solve so many tasks just because of one, believe it or not, minor change in the architecture. But it was that minor change, the transformers, along with some extra things that I’m leaving out intentionally as they’re just too technical, plus the huge amount of training data, that gave us the LLMs that we have today.
So if we go back to legal: translation and summarization are now kind of easy tasks for LLMs. It’s what they do best and what they were basically designed to do. So I think the legal domain is going to be taken by storm, because now, instead of having to read tons of documents to know what laws are relevant to a case you’re looking at, you can literally just ask the LLM. It will spit out the answers in seconds.
NN
A massive timesaver. So you can imagine that it enables lawyers to spend more time focusing on the more complex issues, or maybe dealing with the client side, which can probably be just as challenging in many cases.
AM
Absolutely. The human side.
NN
So I understand a more advanced stage of your ongoing work involves grounding these LLMs on jurisprudence and legal doctrine. These are more open to interpretation than legal statutes and would enable the LLMs to better handle cases where the law is ambiguous or silent on certain issues, and to go into the details of the implications of a certain legal decision. With this, do you expect that AI will be used to automate the more complex tasks, such as constructing legal arguments or negotiating contracts?
AM
So that’s kind of a sensitive area. I think a lot of people are divided on how much we want to allow the LLMs to be involved in that regard, because when things are open to interpretation, who is to say the LLM has the right interpretation? And I completely agree with those people, but the way I see it – and that’s basically a fight that I’ve been having with quite a few people – is that people want to be afraid and do nothing about it. I like a different approach. I want to try, and I want to see what happens. I want to fix the results if I’m not happy with them. And if, after I do my testing, I’m still not happy with what’s coming out of it, then I would decide, okay, this is not the right approach to take. But for now, I think it’s too early for us to decide we don’t want to do this – that’s just the wrong approach. I think we need to try, but we also need to remind everybody that LLMs are not here to replace anybody in the legal domain. They are here to assist. So even if an LLM can give you the complete analysis of a case, and even make a judgment on a case, is that a bad thing? I’m not so sure, because maybe that judgment can be reviewed by an actual judge, who can find the pitfalls, find the issues, find the missing stuff and fix them, instead of having to do all of the work from the beginning on their own. Because at the end of the day, you want justice for all, quickly, and that’s basically where we’re trying to help, right? Especially in a country like Lebanon, where people sit in jail for years before even being judged – if we can help just there, I think that would already be a huge benefit for us.
NN
Yeah, massive. And I think everyone probably now knows the story about the lawyer in America who had prepared a case using an LLM and took it to court, and it turned out it had been a hallucination – a fabrication. So absolutely, you can’t replace the humans. Regardless of the outputs, they still have to do the work of checking them, and the creative side of things, like finding pitfalls.
AM
Absolutely. There was another case in Canada not too long ago as well, and the lawyer had to pay a huge fine.
NN
Yeah. So, some cautionary tales there already. You mentioned ChatGPT just then, and we know that putting anything into ChatGPT, or basically any other external LLM at this stage, is kind of similar to putting it into the public realm. The companies that own these models may then use the data to train the LLM, and it could appear in a response to a user query at a later date. So lawyers should obviously not be putting confidential client data into these systems to produce legal advice. The question is: under what circumstances could people use LLMs to process confidential data, and what confidentiality protections would need to be built into the system?
AM
So my personal opinion is that anything confidential cannot go online, because as much as you would like to claim that you are safe, there’s always a loophole somewhere that we just haven’t seen yet. The way we’re trying to approach this at Siren is that we want all of the work to be offline. And yes, this implies some cost – hosting LLMs offline means having the hardware and the infrastructure. But the beauty of what’s happening today is that a lot of researchers, a lot of people, are trying to democratize access to LLMs. We are no longer talking about millions of dollars for hosting LLMs; we are literally talking about a few thousand dollars today, and you can host your own LLM on your own machine. And I think this is just going to keep evolving – it’s just going to keep getting cheaper. Eventually everybody will be able to host one; maybe you’ll even be able to host an LLM on your phone. I would not be surprised if this is where we are a few weeks from today.
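As one hedged illustration of what offline hosting can look like, the sketch below uses the open-source Hugging Face `transformers` library to run an open-weights model on a local machine, so documents never leave it. The model name is illustrative, not a recommendation – in practice you would size the model to your GPU/CPU and memory.

```python
# Sketch: run an open-weights LLM locally so confidential text stays
# on this machine. Assumes the Hugging Face `transformers` library;
# the model name below is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # downloaded once, then cached locally
)

confidential_text = "..."  # client documents never leave this machine
result = generator(
    f"Summarize the key legal issues in the following text:\n{confidential_text}",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```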
NN
That’s mad. So is that due to the cost of high-powered computers reducing?
AM
It’s twofold – threefold, even. One is the cost reducing. Two is the competition: there’s a lot of people competing to have the best LLMs. Three is that we now know what works for LLMs – we know the underlying architecture and algorithms – so now people are actually making those even better. One of the fields that I work on is optimizing algorithms and computational complexity, which is basically how we can make things even faster and consume fewer resources without affecting the quality too much. There’s a lot of effort going in that direction today. So I honestly would not be surprised if, maybe a month or two from today, we have LLMs that can fit almost anywhere.
NN
So in terms of the confidentiality protections, for a legal professional that would mean basically installing one of these LLMs on their computer and making sure that the data stays there, on their local servers.
AM
Absolutely. I think that’s the only way to guarantee safety. As safe as we like to claim we are, once our data is on the net, it is not safe anymore.
NN
So, algorithms are instructions that order computers to take a pre-specified set of tasks or inputs and process them in a particular order, right?
AM
Yeah, we call it a recipe.
NN
Okay, a recipe – even easier. That makes them suitable for automating process-driven tasks. So we’re talking about AI doing the research, analysis, synthesizing text and summarizing data in the legal field. You might expect these tasks to typically be done by junior lawyers, paralegals and graduates. So if AI is taking care of these rote tasks, do you see a risk that law firms would reduce their hiring for more junior or less specialized roles?
AM
No, no, no and no. I keep saying that, and I keep trying to shout it out loud: I don’t think LLMs are going to take anybody’s job. I just think they’re going to elevate the quality of what everybody is doing. And if people are not willing to be in this mindset, it just means they’re too lazy. They just want to keep doing things like they’ve been doing them. They don’t want to change; they don’t want to evolve. I honestly don’t think LLMs, at least today, can replace anybody. I just think they’re going to make us more efficient at the things that we want to do, and they’re going to help us achieve better quality in almost everything, especially in the legal domain. Because now, in a click of a button, I can know all the laws that are related to a certain case – that used to be months of work. So that means that now I can focus my research more in depth, not just on finding laws, but on figuring out how they come together, or whether one contradicts another, or whether there’s something deeper that I need to understand. So I really think, for now, they are helping us get rid of the repetitive tasks and elevate the quality of everything else.
NN
Right. And you can imagine this would also bring changes to the way law is taught at undergraduate level. If LLMs are becoming so commonplace, then there’s less need for universities to teach the tasks that involve that sort of drudgery and bookwork – which is obviously an important way of teaching and learning – but if students’ time is freed up to focus on the more complex stuff, you can imagine them entering the profession at a higher standard than they otherwise would.
AM
Absolutely. Now, the teaching domain is a little bit tricky today. We still haven’t figured out the best way to approach LLMs in the education sector. I’m a teacher – that’s one of my biggest passions. The way I do it in class today is that I allow students to use ChatGPT for everything, but I make sure that whatever I want them to do is not something they can find an answer for on GPT, or on any LLM for that matter. So again, I think fighting the change is useless. We need to find the best ways to incorporate LLMs into everything. And the education sector is definitely one where LLMs are going to bring a lot of change, because I can already hear a lot of professors complaining about students asking GPT for the solution, yada yada yada. Well, that’s your problem, right? GPT is out there. You have to design your assignments better. You can’t just sit there and complain – you have to do something about it.
NN
You mentioned a moment ago that LLMs are going to be democratizing legal expertise. What do you mean by that?
AM
So I’m not a legal expert, but I was working with a lot of legal documents recently, for the obvious reasons, and I couldn’t understand a lot of what I was reading because it was just too technical and too domain-specific. So I literally just asked the LLM to simplify it – to tell me what it just saw, but give it to me in slang, without losing the true meaning. Now, some legal experts are going to say this is not possible, and maybe it’s not possible to capture all the intricacies, but at least you can explain to everybody what is at the heart of a certain law without using a lot of technical words. I think this alone is a big, big change. We’re planning on releasing a public legal assistant that will literally explain to the public, in layman’s terms, anything related to a certain case they’re involved in, without them having to be a legal expert to understand. And I mean, we claim we want to be a society governed by laws and justice – but how do you do that if people don’t even understand the laws, right?
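A minimal sketch of that plain-language prompt pattern might look like the function below. The prompt wording is an assumption for illustration – it is not Siren’s actual tool – and `llm` is a hypothetical stand-in for whatever locally hosted model generates the text.

```python
def simplify_law(article_text: str, llm) -> str:
    """Ask a (locally hosted) model to restate a legal article in plain language."""
    prompt = (
        "Rewrite the following legal text in plain, everyday language. "
        "Preserve the legal meaning, and list anything you had to leave out.\n\n"
        + article_text
    )
    return llm(prompt)  # `llm`: any callable that maps prompt -> generated text
```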
NN
So important. And even if you don’t have a legal case coming up – which hopefully most of us don’t – with the amount of misinformation and disinformation out there right now, a lot of the time you can fact-check what people are saying against the laws. And you can imagine that this is something that could be very useful within conflict situations, where people are fighting over a narrative of what’s right and wrong.
AM
You bring up a very interesting point. I forgot to mention that another tool we’re working on is legal checking. Basically, I can now check the legality of statements made by anybody, just by feeding in their speech, their text, their article. And I can literally ask the LLM to tell me whether everything they said is legal or not, and the answer will be back in a few seconds. I think this is going to force people into more transparency and into being better law-abiding citizens. I think there’s a lot to gain.
NN
Right – with a tool like that at people’s fingertips, it should pressure people in the public realm to be more accountable for what they say, and do it in real time. This is particularly important given the way algorithms are designed on social media to push a particular kind of controversial, but not necessarily factually true, content to users. So having a tool like that is incredibly valuable as part of this aspect of democratization. I wonder, though, does it assume a certain level of digital literacy amongst users about the potential pitfalls of LLMs? We touched on the need for human supervision, but how important is it for users to understand how LLMs operate, to understand the underlying algorithms? And what sort of training would you say people should have in order to be able to use LLMs responsibly and safely?
AM
This is a complicated question, and honestly, I don’t think anybody has a complete answer just yet, because this is evolving so quickly – we are basically discovering new things almost every day. But I think people in positions of responsibility cannot use LLMs without being fully trained on what they are, what they do, what biases are inside of them, what the risks are, etc.
People in less critical positions – say, a software developer asking the LLM to give them a piece of code – can check whether that code works quite naturally, so you don’t really need to understand what the LLM is doing. You can just test what it outputs, check if it works, and then you’re done. So that’s kind of a second level of users for LLMs.
And then you have the general public, who are not necessarily technologically savvy. There, you cannot ask them for anything. It’s our responsibility to make sure that what we give them minimizes risk as much as possible. There’s always going to be risk, and I’m aware of that. But I’d rather take risks and make change than just sit there and complain.
NN
Right. So this is where that AI safety debate comes in: whether you have an open-source model and allow people to do whatever they want with it, or you build the safety mechanisms into it; or, on the other side, whether you actually tackle the use cases and address those.
AM
I think there’s a nice related example here. There’s a series that I watch and love a lot – House M.D. It’s about doctors and hospitals, and there’s always this debate about how the patient needs to make an informed decision. But how informed can that decision be if you don’t have medical training? What the heck do you know about what’s happening to you, other than what the doctor said you should do? And you’re usually going to follow that, right? The doctor seems like an authority figure that you need to follow, and this is what most people do. And when there’s a screw-up, people always want to blame the doctor. But the doctor’s intention was never to hurt – he was literally trying his best. I don’t know of a doctor who wants to kill patients. Hopefully not, right? And I like to think it’s the same thing happening in this AI era. I don’t think AI engineers want to hurt people. But it comes with its own risks. Some bad things are going to happen, but we need to learn from them, fix them and move on.
NN
And I think a lot of it comes down to thinking ahead about the ways in which an LLM could be misused and then tackling that – whether that’s through legislation or public awareness or whatever it might be.
AM
And that’s one of the reasons why we’re not doing this work alone. We have lawyers, we have judges, we have policymakers actually overseeing the work that we’re doing, because in this day and age, technical AI engineers can no longer work alone. You have to work with all the parties involved, with all the people potentially affected by the software you’re building. Everybody has to be involved from the get-go. You cannot just do something and then hope for the best. You need to involve as many people as you can from the get-go, to make sure you foresee as many of the potential pitfalls as possible before you get there.
NN
That again points to another value of the open-source model, where you have a wider pool of people who are able to look under the hood and see if anything could go wrong.
So, of course, what LLMs are able to put out depends on what they were trained on, and their becoming more humanlike and accurate depends on how much data they’re trained on. What role should the state play in making data available so people can develop new programs to tackle issues of public interest?
AM
Basically, we think of LLMs as being very close to humans, but not a lot of people know this: the first time LLMs were trained, without any kind of forced intervention, they were racist, sexist and ageist. It’s sad, but I think it reflects the reality of who we are on the internet. But then LLMs were forced to be decent citizens by users and AI engineers, because they realized this was not okay. So data, I think, is no longer a big issue today. As I said before, a lot of people are democratizing access to data; even a lot of companies are open-sourcing the training data for their own LLMs. Where the state can be extremely helpful – and I usually don’t like the state to be involved in any of these things, because I think they slow things down instead of helping progress – is that they could learn from how LLMs have evolved and where we got to, by digitizing everything, all the processes that they are involved in, and making sure that they can learn from them. I’m thinking of governments like Lebanon’s – governments in what we call the Third World – because I think LLMs can help a lot there. But we have to start by digitizing. There’s a lot to be done before we even get to LLMs, right? In Lebanon, even some of the most basic things you want to do are still done on pen and paper. Hopefully we will elect people who see what needs to change and start working in that direction. But it’s a long road ahead, especially for countries like ours, while other countries now have ministries for AI, right? We still do things on pen and paper, unfortunately. So it’s going to take a while, but I’m hopeful.
NN
Yeah. So digitizing the processes. Another thing related to this is the paucity of Arabic-language data online. People training LLMs face difficulty accessing high-quality data or text in Arabic, as opposed to the dross people post on internet user forums, right? So how can one overcome that challenge to get high-quality Arabic-language text – or text in any other language where there isn’t already a large volume of it online? How can you access those kinds of texts and then train models that are able to process that language?
AM
You have to partner up with the right people. And I think in this domain, the right people are the newspapers, because they have data going back years and years, and it’s clean, it’s been sanitized and it’s been indexed properly. I’ll give an example, because he’s a good friend of mine and he was actually so open with everything he had: Ahmad Salman from the Al Safir newspaper – I don’t know if you know it, but it has shut down. He basically gave us access to the whole archive so that we can train for better Arabic understanding. I think that’s it: you need to partner up with people who already have quality data. And I don’t think it’s impossible. It’s just that, unfortunately, we end up competing more than collaborating. But I think at this point the time is to collaborate.
NN
Absolutely. And if we take a regional look at the state of AI in the Middle East and West Asia, there’s been a lot of news recently about major US players investing heavily in AI companies in the Gulf, with the US seeing these kinds of partnerships as key for pulling the region away from China’s orbit. To give an example, Microsoft recently announced an investment of $1.5 billion in G42, a leading company in the UAE. This company has recently created Jais, an open-source LLM that it says is the world’s most advanced in Arabic. What do you predict will come out of these partnerships? And can they benefit smaller tech companies and entrepreneurs in the region?
AM
I don’t like those partnerships, because I think they are based on the wrong things. And let me be very clear: I don’t like this two-faced approach, especially from the US. On one side, the US wants to put money into some local companies; on the other side, they don’t allow us to buy GPUs, because they’re afraid of how we’re going to use them. So I already have a problem there. Either you want to help us do the right science and research, or you don’t – just pick a lane. I’m not going to talk politics, but I have a huge problem with that, and with just investing in select companies in countries that are known to be more than allies. I’m not a big fan of that. And that’s why I love open source; I love open data. It doesn’t matter who has the money – everybody can do it right now. Yes, infrastructure is still expensive, but that also is being worked on. Eventually we can connect 100 computers and have the equivalent of a GPU, right? But I don’t like the fact that it’s becoming more of a race than a collaboration to get the best out of AI, and this pisses me off. Even OpenAI started as AI for the good of humanity, and now it’s closed AI – we have no idea what the heck they’re doing. But other people picked up the torch and are doing exactly what OpenAI was supposed to be doing.
NN
All right, so if we move away from the politics a bit, and you could give me an overview of where you see the state of AI in the region, what would your assessment be?
AM
I don’t know. I hope there are going to be more and more collaborations to get the best out of AI, because a legal AI tool, once developed, can literally be put at the service of everybody. LLMs and the current state of AI have made it so easy to cross languages and cultures – you just have to design things once and they will work across dialects and languages. Even if there are specificities inside societies, LLMs can pick those up. I think it would be a shame if it goes towards competition and races rather than collaboration on unified tools that serve the biggest possible number of people. I know companies want to make money, and they should keep doing that, for sure. But I think there should be somebody leading the way for AI for the public good in the region.
NN
Yeah, I think you hit the nail on the head there, really. Collaboration is so important in actually creating a conversation about what people want collectively, as a shared vision for AI and how it can be used, rather than its development and deployment being decided on by a handful of companies, far away, that want to make money. So on that note, we could probably wrap it up – but I have another question. It doesn’t really flow from that, but I’m going to ask it anyway. You come from an academic background – you are an academic – and you’ve recently been working at Siren, a consultancy. I want to ask how you feel that transition has been, the blending of the two worlds.
AM
It’s been hard, let me put it that way, because usually I’m used to working with master’s students and PhD students, in small groups of two to four people at best. And now I’m working with teams of tens – 20s, 30s, 50s. Everything we’re doing is new to everybody, including myself, because this revolution started a year or two ago. We don’t even have courses in most universities teaching these things, which is something I’ve started doing now – I’m introducing a course at AUB about LLMs and RAG systems, because, as of today, nobody teaches that. So it’s hard. You need passionate people who are really eager to get on that wave of AI revolutionizing the tech industry. But at the same time, it’s a lot of fun. So for me, it’s exactly where I want to be. I don’t like being only in my office doing theoretical research; I also want to be doing some tech for social good.
NN
Brilliant. So, blending theory and praxis in a socially beneficial way, with some really exciting projects ongoing and in the pipeline. Thank you very much for joining us – hopefully we’ll have you back another time soon.