Curious about how AI is shaping legal services? Dr Jane Hogan’s research looks at the regulation of AI generally, including AI used in legal services, with a focus on ensuring that the regulation supports those uses of AI that will improve our society and our daily lives.

Key Presenter:  

Dr Jane Hogan, UTS Faculty of Law

Dr Jane Hogan is an early career researcher and lecturer, an experienced intellectual property lawyer, a knowledge manager, and an experienced organisational leader. After many years in legal practice as a patent litigator and in management roles in commercial law firms, Jane chose to undertake a PhD at UTS as a Quentin Bryce Scholar in order to contribute more broadly to a fair and just society.

Transcript:

And welcome everyone to our second Justice Talk for Spring.

It's a real pleasure to see you all here and to have you joining us for an event that's being run in conjunction with the excellent series of activities and programming that is part of the Faculty's Law, Tech and Social Justice Week.

Tonight, we are hearing from one of our faculty members, Dr. Jane Hogan, who is a permanent member of academic staff here at UTS and an expert in IP and the regulation of AI. A really warm welcome to you, Jane. Tonight, Jane will be presenting on the topic of AI legal services, asking the question: is our regulation ready to support the use of AI to improve access to justice?

My name is Associate Professor Anthea Vogl, and as many of you know, I am the faculty co-director of the Brennan Justice and Leadership Program. I'm really delighted that everyone is able to join, some of you at the height of assessment season and after such a beautiful day; we have a really amazing list of people attending today.

And before we get going, I did want to acknowledge a few members of our Brennan Justice and LSS teams who have joined us. We have Eva Ossowski, who is the VP of Social Justice from the Law Students' Society; our acting dean, Tracey Booth, who is joining us this evening; Bec Keen, our Student Programs Officer; Monica Reed, our Academic Services Manager; and a few other members of the Brennan Justice and Leadership Committee. And most importantly, of course, our speaker for tonight, Dr. Jane Hogan.

Next slide, please.

Before we launch into official proceedings for this evening, I did want to take a moment to acknowledge that the UTS Faculty of Law is situated on the unceded land of the Gadigal people, and to pay our respects to Elders both past and present, acknowledging them as the traditional custodians of this land. We acknowledge that these lands have always been places of law, and today exist in plural legal worlds.

I'm also zooming in from the unceded land of the Gadigal people, and for those of you who are zooming in from other places, or indeed from Gadigal land, I warmly encourage you to note where you are sitting in from tonight in the chat. Thanks so much, Jane. Next slide.

It is now my pleasure to more formally introduce you and give everyone a little bit of a sense of Jane's incredible work and her background before she begins speaking to us on her particular topic for today. Dr. Jane Hogan is an early career researcher and lecturer, an experienced intellectual property lawyer, a knowledge manager and an experienced organizational leader.

After many years in legal practice as a patent litigator and in management roles in commercial law firms, Jane chose to undertake a PhD here at UTS with us as a Quentin Bryce Scholar, which was also the way I undertook my PhD here at UTS, and she did that to contribute to a fair and just society.

Jane's research looks at the regulation of AI generally, including AI used in legal services, with a focus on ensuring that the regulation supports those uses of AI that will improve our society and our daily lives.

Before I hand over to Jane, I want to go over a little bit of housekeeping that will hopefully be very familiar to all of you by now. Just to let you know, this event is being recorded, but only for teaching and learning purposes, and only those of you who speak and turn your camera on will be included in the recording. There will be time for questions at the end, and Eva, who is joining us as one of your LSS VPs, will be very ably hosting the Q&A and is ready to receive your questions. We encourage you to participate after Jane's talk: you're welcome to pop questions in the chat as we go through the talk and as they occur to you, and otherwise we will have some time in the Q&A session, when, if you'd like, you can pop your cameras on and ask those questions in person. We'd also encourage you to put your cameras on generally; we love to see your faces, and it's lovely to see how many members of the Brennan Justice community are joining us here tonight for this important topic.

Last but not least, your RJ points will automatically accrue by way of registering for this talk. If your name here is different to your name as registered on CareersHub, or you're zooming in from somewhere else, please just send Bec Keen a private message in the chat with your CareersHub name, and Bec will be able to ensure that your RJ points are accrued.

And so without too much more from me, it's a real pleasure to introduce Jane and hand over to you for this exceptionally popular event. Jane, I think students are really interested in this topic, as is the entire academic community at this particular point, and so we're looking forward to hearing from you. Thank you.

Thanks, Anthea.

So I thought I'd talk today about the research that I undertook for my PhD, and these are the things that we're going to cover. I want you to understand how I got to my research question; I want you to understand how I framed the problem itself, particularly for my students in my Justice Tech class; and then I want you to understand my reasoning as to why I think the current law needs some reform if we're going to reap the benefits of AI.

So, as Anthea has given you a good understanding of my background, I've done quite a lot of different things in practice and in management. My interest in AI was first piqued when people started trying to sell me tools in my role as a manager in a law firm. They were AI-based tools, and I started to think: this is something very different to what I'd been seeing before in legal services. So I wanted to do some more work around what these tools were, where they were being used, and how we could make sure that they were going to be used for the benefit of society and not just for making profits.

So I started my research by thinking about where AI tools were used in law, and I discovered very quickly, and this was even before GenAI became a thing, that they were being used all over the place: legislation, judicial decision making, administrative decision making, and so on. Increasingly they were being used by lawyers. What I was interested in, though, was the idea that they could be helpful for citizens who might not have access to a lawyer because of the expense. So I wanted to look at the regulatory model that applied to that, and that inevitably led me to the regulatory model that applies to lawyers.

You may or may not be aware, but there are two different regulatory models that apply to lawyers, depending on the jurisdiction. There have been two attempts to produce a uniform law, but they've failed. So I decided that I would look at the later regulatory model, the Uniform Law, and ask myself the question: is legal services regulation ready for AI?

That question has two big concepts in it: what did I mean by readiness, and what did I mean by AI? My motivation for doing the research was to make sure that these tools can be used in a socially beneficial way. My students from Justice Tech will say that I'm fairly neutral as to whether technology is good or bad; I think it can be good. But I also had this idea that regulation would be important if we were going to get good AI, as opposed to mediocre AI or harmful AI. So I certainly see the development of AI as being in conversation with society.

So, readiness: I set myself this test of readiness, and what did I mean by it? I was looking at the existing regulatory model to see if it was ready for AI, and I developed three criteria by which I would judge it. I certainly didn't expect the existing regulatory model to be perfect and to cover all the issues, but there were some things which I thought it would be really useful for the existing regulatory model to do. The first was what I called regulatory readiness, where I was asking: is there a good regulatory model that could be used as a base to build on? There I was thinking about law as a form of regulation with a purpose, which I'll talk to in a minute, and I was looking for a solid base.

My next concept was conceptual readiness, where I was conceptualizing law not as regulation but as an activity, something that we do. In particular, I was conceptualizing law reform as an activity that follows its own logic. I wanted to understand what that logic was in relation to legal services regulation, because I also drew on practice theory, which suggests that we'll continue to use the same logic in each reform. And then I had this test of precedent readiness, by which I don't mean the precedents that we would normally refer to; I'll explain what I meant by that.

When I was looking at regulatory readiness, the test that I set for the Uniform Law was one of purpose: what should I expect the law to do? I used the work of Brunswick and converted it into a test that would apply to AI. So I asked: would the existing law steer the use of AI in providing legal assistance in a manner that protects the institution of law as an important part of social life; promotes access to justice as an important adjunct of the idea that we're governed by the rule of law; protects the fundamental legal services values of loyalty, confidentiality and competence; and provides an appropriate balance between the competing players, whether new entrants to the legal services market or legacy players? I was looking for a model that did those things.

For conceptual readiness, what I was looking for was the idea that our process of reform, and the ideas that we've got about legal services and what legal services governance covers, are fairly sound. So I was looking for a clear understanding of what we're regulating; some indication that the logic that underpins our past reforms, and therefore the logic that underpins our existing law, really understands the regulatory implications of past changes; and some really good arguments as to why this is the best approach. So fairly straightforward areas that I was looking for.

The third area that I was looking for was some sort of precedent for understanding how to crack what I've called the regulatory problem of non-lawyers. I was looking to the law to see how it dealt with non-lawyers when they were in an influential position in the legal context. It was this concept of seeing how well the law could adapt to a circumstance where non-lawyers are really important, when historically an individual's status as a lawyer was what mattered. That status was important because it brought with it an ethical sense, and because lawyers have unique skills to produce good quality work, whereas non-lawyers had historically only participated in a fairly subordinate role. And it might seem odd for me to be asking the existing legislation to solve this problem in some way.

But when I looked at the existing legislation, it seemed to me that we already had a regulatory problem of non-lawyers. I had thought AI would create this problem going forwards, but we already have one. To explain how that arises: when I looked through the existing legislation, there's actually a great multiplicity in who is able to provide legal services, who is an authorized legal services provider. We've got the historical model, the individuals on the left, which is what I called the traditional lawyer-centred model. We've got community legal centres, which are also authorized legal service providers. And in the middle we've got some new authorized legal service providers who emerged in the 1980s with the rise of an economic approach to law, including incorporated legal practices: a corporation that is permitted to provide legal services in a Uniform Law jurisdiction, and which is a non-lawyer itself. So I thought that if there's a good model for incorporated legal practices, it might give us a good model for dealing with the future regulatory problem of non-lawyers associated with AI.

So when I looked specifically at how the Uniform Law applied to the use of AI, I specifically looked at corporations, to understand whether there was a solution to this regulatory problem that we could reuse as one method of regulating AI.

So, having got my concept of readiness down, I next had to consider what I meant by AI, and it won't surprise the students doing Justice Tech with me that I took a sociotechnical view of AI. Rather than just focusing on how AI is defined in the literature, I focused on AI as having two important elements. One element is the idea of AI as a product: the technology itself, software with particular characteristics, and I did a lot of work, even before GenAI turned up, to figure out what those characteristics were. The other is AI as a field of human endeavor: a series of practices applied to creating a particular form of software. And I thought that if we're going to understand how useful the regulatory model is, we need to understand both of those.

When I looked at the practices, it seemed to me that we could divide them into three different groups: practices of production, practices of distribution and practices of use. The area that legal services regulation had historically focused on was the practices of production: we control the practices of production by controlling who can be a lawyer, who can be an authorized legal service provider. That would therefore be where I focused a lot of attention: what do the practices of production of AI look like? Again using the practice-theory idea that particular practices have a particular logic, I also thought the logic that follows these practices of production will really vary depending on who is involved. The logic of the practice of someone like Oxley, who is deeply embedded in providing free legal services, who has a good understanding of how the law works and a focus, in their AI development, on making sure it's verifiable, will be quite different to, say, the logic of a commercial offering within legal services. So I focused on that as my conceptual framework.

From there I asked myself: if I take this concept of products and practices, what actually changes if AI starts to be used to provide legal services? From the work that I did on understanding AI as a product, as a particular kind of software, I could work out that there were actually some quite interesting parallels to legal services regulation. Along the top is a very typical model of an AI system: it has a particular purpose, it perceives information from its environment, and then it processes those things and creates an output. When you look at those elements, they actually map to the traditional values of legal services regulation. The purpose is about the duty of loyalty: whose purpose is important? Perception is about what information is taken in from the environment, so it raises questions about the duty of confidence, among other issues. And the idea that it produces some sort of legal output raises questions of competence. So fairly typical legal services values. But it also raises the important question of the duty to the court, because this is not just a purely private transaction: if legal assistance is provided in a way which is ineffective, it will have a knock-on effect on the court system, as well as potentially on the rule of law. So I thought it's still important to maintain legal services values as a matter of principle, but also that the nature of AI is really important for ensuring that we keep and maintain those values.
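
As a rough aid to the mapping described here, this is a tiny hypothetical sketch pairing each stage of a typical AI system with the legal services value it implicates; the stage names and duty labels are illustrative assumptions, not terms drawn from the Uniform Law or from the talk.

# Hypothetical sketch only: pairing the stages of a typical AI system
# with the legal services values discussed above.

AI_STAGE_TO_DUTY = {
    "purpose":    "duty of loyalty (whose purpose does the system serve?)",
    "perception": "duty of confidentiality (what information does it take in?)",
    "processing": "competence (is the reasoning legally sound?)",
    "output":     "competence, plus the duty to the court (is the result reliable?)",
}

def implicated_duty(stage: str) -> str:
    """Return the legal services value implicated by a given AI system stage."""
    return AI_STAGE_TO_DUTY.get(stage, "no mapped duty; review manually")

for stage in ("purpose", "perception", "processing", "output"):
    print(f"{stage}: {implicated_duty(stage)}")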

I then thought about what the practices look like. Even though there is a multiplicity of authorized legal service providers enabled in the legislation, the assumption is that the current state looks something like this: a one-to-one model, where one lawyer provides legal services to one client. It's governed by a lawyer-client relationship, and the assumption is that that relationship gives rise to the imposition on lawyers of our ethical code, so we're bound by ethics. The lawyer-client relationship also gives rise to special legal obligations, so it's not just ethical obligations; there are legal obligations too, secured through the past regulatory model. And it's a human-to-human interaction.

When I thought about what it would look like if AI is being used, I thought it's going to look very different. The first thing is that we've got a system here, the AI system. We might have a corporation; in my hypothetical, the corporation acts as the deployer, so they're the ones behind the system, not a lawyer. And behind the corporation as deployer, we might have a lot of people who've contributed to the AI system; in fact the corporation might have no influence over the AI system at all, but might merely be providing a white-label service. So on the one side, we've got a complex AI development chain.

On the other side, we've got this AI system that might serve multiple clients, so it's not just one client. When I did legal advice for my clients, I might sometimes take a passage out of past legal advice, but it was always a new piece of advice, and there was always an individual client. In this model, the one system might serve up similar advice to many clients. So I thought: what is the significance of these things? And there were some really important ones. Firstly, when AI is being used, we've got a radical shift from a one-to-one model of lawyer-client relationship to a many-to-many model. We've also potentially got a radical shift as to who's responsible for legal services: the actual provider of the legal services might not be the one who's responsible for creating the right quality of work in the tool, because it might be the AI contributors on the left-hand side of the screen who are really responsible for the tool. It creates a new emphasis on multi-disciplinarity, and it creates the possibility of law without lawyers. So it creates a really different regulatory problem for us, what I've called the regulatory problem of non-lawyers. It potentially brings into legal services, either through the types of people who enter the legal services market or through existing firms using the same tactics as in other areas, what I call the logic of digital capitalism; to my mind that is a largely exploitative logic, which is not necessarily consistent with the legal ethic. It creates a shift from human-to-human interaction to human-computer interaction, which raises lots of questions about what's permissible: are dark patterns of design permissible, and is that consistent with a duty of loyalty? Is fully automated legal service permissible, or should there be constraints on it? And it also, obviously, provides the ability to provide legal assistance at scale.

I looked at lots of the regulatory implications of this particular shift, and there are lots. There's the potential for it to raise questions of competition, which we haven't traditionally seen in the legal area, and critical questions about who owns which bits of the AI system, who's got access to the data, who owns the data, and so on. So there are lots of new issues raised by this, but the thing which concerned me the most was this idea of scale. When I thought about the regulatory implications of scale, I really thought it's a double-edged sword. On the one hand, the ability to provide legal assistance at scale gives us the opportunity to use these tools to improve access to justice: we know that there's an enormous amount of unmet legal need, and these tools might be one way, not the only way, of addressing that. But whilst they might give us the potential to improve access to justice, there's also a real risk that a flaw in a single tool will affect people at scale. In my career, maybe I advised 100 or 200 people; I didn't have a lot of clients. But if an AI system is advising 100,000 people and giving them the wrong information about what to do in court, there are flow-on effects for the efficiency of the courts; and if it's distributing misinformation and disinformation, there are flow-on effects for how legitimate people think the administration of justice is, and how confident they are in the rule of law.

So I thought that this scale creates what I called a new calculus of risk. In the regulation of lawyers, we've always had very proactive, ex ante regulation, and the scale of AI means that it's really important for our regulation to be proactive and precautionary, preventing these impacts before they occur. And these risks are real. When I started my thesis, GenAI wasn't a thing; now Damien Charlotin is mapping the different examples of AI hallucination around the world, and we can see hundreds of these cases: fabrications, misrepresentations and so on, by multiple kinds of users, self-represented litigants using AI to prepare submissions, lawyers, even judges. And, fortunately for me, there's a very recent case where the Court of Appeal talked about this. A self-represented litigant had decided to use AI to help them with their submissions in a complex trusts case, and unfortunately the submissions turned up cases that didn't exist, cases that did exist but were cited for the wrong authority, and arguments that addressed issues that were not issues in the proceedings. And because the court has to give reasons, it had to unpick all of that. Justice Payne said it provided a really good example of how AI, particularly when used by a non-legally-trained user, might actually add to the cost and complexity of legal proceedings without any appreciable benefit. So these are real risks that need to be addressed. That was one of the key things that came out of my analysis of what the world looks like if AI is used to provide legal services.

Excuse me for a second.

So then I turned to look at what the Uniform Law does. Its main assumption is that we've got this lawyer-client relationship, but that's not what I was interested in. I was interested in the scenario where it's the corporation, which in the Uniform Law is an incorporated legal practice, that provides legal assistance directly to the client. So I looked at that regulatory model, and when I started to look at it, I found it a little bit perplexing; it didn't make a lot of sense to me. So I dug back through the history of how this regulatory model emerged. What I found was that at the point in time when we were thinking about enabling corporations to be legal service providers, there was a big debate about how we should regulate these entities. On one side of the debate was the argument that this is an entity that's an authorized legal service provider, and we ought to regulate the entity as a whole by imposing on it the same legal obligations that are imposed on lawyers. The first corporation off the block, if you like, was the solicitor corporation, which was deemed to be like a solicitor, and so was subject to the same kinds of regulation as we are, including oversight by the court and oversight by the statutory regulators.

The alternative approach was to say: we're not going to go down this path of entity regulation; we're going to do what I call regulation by lawyer. This is a form of regulation where you want to control the behavior of a third party, but instead of directly regulating the third party, you regulate a lawyer to do so. When I looked at where this particular strategy came from, it seems to be a strategy that was originally applied to get over the problem of the Law Society having limited power over non-lawyers. If you wanted to control, say, a negligent clerk, and didn't want that negligent clerk to be able to practice in a law firm, you could direct lawyers not to employ the clerk. So I called it regulation by lawyer, and contrasted it with the regulation that applies to us, the regulation of lawyers. There was this debate as to which way we should go, and in the end the regulation-by-lawyer strategy won out. New South Wales was the leader in this process, and, for reasons I'll explain, the strategy won out around 2000 and was then picked up in the various subsequent reforms, including in the Uniform Law. The essence of it is that instead of binding the corporation to act ethically, you require the corporation to have a solicitor as a director, and you require that solicitor to ensure that the corporation acts ethically.

So I thought there were immediate problems with that strategy, even setting aside the question of AI. I thought it was unnecessarily complex, and it makes the solicitor director responsible for the conduct of the ILP in a context where they're not in control. Another big debate about corporations in this period was whether they should be owned and controlled by lawyers, or whether non-lawyers could own and control them, and because of the influence of the competition and consumer push, it was held that non-lawyers can control them. So although the corporation's ability to practice is dependent upon a lawyer being in it, the corporation itself is not fully in the lawyer's control. The strategy does nothing to ensure that the corporation's actions are aligned with the principles of legal ethics, creating a big challenge for the solicitor director in making sure the corporation acts ethically in circumstances where it's not bound to. And it also creates problems for the solicitor director in ensuring that the non-lawyer directors, who are not bound by the same principles of legal ethics, actually follow the solicitor director's directions.

The other thing which I uncovered was that this strategy, because it's so focused on the solicitor, on placing the solicitor director in the corporation, doesn't look at the nature of the relationship between the corporation and the client. So there's some uncertainty as to whether, if you're a client of an ILP, you're protected by the same legal rights and remedies as you would have if you were going directly to a lawyer.

So why did we get this strategy? I looked at that too, and the justification for this particular strategy was based on an argument from status to function. I looked at the scholarly taxonomy of arguments justifying the reservation of legal services to lawyers, and the argument that justified this regulation-by-lawyer strategy was that lawyers have a particular status, which requires them to act ethically, and therefore they must have an important function within the regulatory regime. I thought that was a problematic piece of logic for a number of reasons. It's a little bit circular. It assumes that the historically contingent fact that only lawyers were bound by the principles of legal ethics is some sort of law of nature. It doesn't open up the possibility of inducting non-lawyers into the principles of legal ethics. And it fails to take account of an alternative way of thinking about how we should regulate in new areas: we could go from function to status. Instead of looking at the status of lawyers and allocating a function to them, we could ask what functions these people play within the provision of legal services, and if they play an important function, maybe we should regulate them. So the net effect of this logic is to narrow down the scope of tools which we might use to regulate in a particular area.

So before I even got to looking at the application of the Uniform Law to AI, I had some problems with the strategy. When I went to look at the way in which the Uniform Law would apply to AI, I was of course looking at this scenario: to put it into the language of the Uniform Law, the ILP here is the deployer of legal services, with a solicitor director within the ILP. So I understood the strategy and its logic; I then needed to understand the legislation. And before I could get to grips with the legislation, I encountered not just the problems of strategy I'd already worked out, but problems of execution. Even though the logic behind the Uniform Law is that we're going to separate out the obligations of the corporation from the obligations of lawyers, with primary responsibility resting on the lawyers, the Uniform Law itself is very confused as to who is the authorized legal services provider, and it does include, in some circumstances, some entity regulation of the ILP. It uses the concept of professional obligations in key parts of the regulatory model that apply both to law practices, which is the umbrella concept for organizations, and to lawyers. Although we would naturally know what professional obligations means when applied to lawyers, that creates a bit of a conundrum as to what the legislation means by professional obligations when it's applied to a non-lawyer such as an ILP. So I spent a chapter looking at that conundrum, and came up with what I thought was a viable alternative construction of the core obligations. Then there was also a problem with the strategy applied in relation to directors. One of the historical approaches was to try to solve the conflict a solicitor director might have between their ethical duties and their duties to the company by declaring not that the ethical duties prevailed but that the legislation prevailed, which was a problematic strategy in and of itself. In the Uniform Law, only the duties that are explicitly set out as being excluded from the directors' duties can prevail, and the way in which they've done it focuses particularly on pro bono work: they've given not just the solicitor director but all directors some relief from the stringency of directors' duties if they want to provide pro bono legal services. That in itself is a good thing: it means they're not potentially in a position of conflict when diverting resources to pro bono services. The problem, when I was looking at it from an AI perspective, is that the exception is worded so that it only applies if they're providing legal services through the use of Australian legal practitioners. So it doesn't look like it will apply if they're providing legal services through AI. So: some problems of execution.

Once I'd sorted out the problems of execution, I could figure out what the regulatory model actually gives us. If we step through each of the different players, this is what it looks like. Any corporation can become an incorporated legal practice, provided it gives notice of its intention to engage in legal practice. There's no ability to say no to that: provided they give the requisite notice, it's just a formality. It's not like the things lawyers have to do, where we have to get admitted and all that sort of thing. As required by this regulation-by-lawyer strategy, they have to have a single solicitor sitting on the board; they don't have to have a solicitor anywhere else in the corporation. And they can use AI. One of the things I looked at was whether there's an implicit requirement that, in enabling these corporations to provide legal services, they have to provide them using lawyers. When I looked at the history of the legislation, there was a lot of debate about multi-disciplinary practices back in the 1990s and 2000s, with the argument being that we could have non-lawyers doing legal work under the supervision of the solicitor director, and that this would make it more efficient. So there's no internal reservation of legal work to lawyers within the corporation, which paves the way for a corporation with just a single solicitor to provide legal services through AI.

The ILP does have some obligations under the legislation: obligations in relation to trust money, legal costs and professional indemnity insurance, though it doesn't have to contribute to the fidelity fund; individual lawyers underwrite that. There's an explicit obligation which says that it's got to comply with its statutory obligations, but on the construction that I put on the provision which deals with its professional obligations, it's not bound to comply generally with the principles of legal ethics, and it's not generally bound by the solicitors' conduct rules. It must comply with all the rules relating to conflicts, so it's got a negative obligation to avoid conflicts, but it doesn't have the positive ethical obligation, under the rules, to act in the best interests of the client.

When I looked at the directors' obligations, the critical provision turns out to be section 34(1)(d), which carries the ambiguity arising from this professional obligations question. The provision obliges the solicitor director to take reasonable steps to ensure that the legal services provided by the ILP are provided in accordance with the principles of legal ethics, even though the ILP itself is not bound. In practice, this is a lawyer-in-the-loop strategy, and it requires the solicitor director to do two things which we're not necessarily trained to do through our law degree. Firstly, it requires the solicitor director to translate the principles of legal ethics into design requirements. I've got some experience in doing that myself, in management roles where I implemented legal technologies, and when I tried to translate these core principles into design requirements, it was actually really hard. The harder task, though, is the obligation to take reasonable steps to ensure not only that the corporation itself complies with legal ethics but, because the production process is so complex, that everybody else involved in producing the system does too. So it seems to me a fairly thin protection for a fairly important question.
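
To make the solicitor director's first task concrete, here is a minimal hypothetical sketch of what translating a couple of those ethical principles into design requirements might look like; the checks, names and threshold are illustrative assumptions, not requirements drawn from the Uniform Law or from the thesis.

# Hypothetical "lawyer in the loop" sketch: hold an AI system's legal output
# for human review unless every automated check derived from the ethical
# principles passes. All names and the threshold are illustrative only.

from dataclasses import dataclass

@dataclass
class DraftAdvice:
    client_id: str
    text: str
    cited_authorities: list[str]
    confidence: float  # system's self-reported confidence, 0.0 to 1.0

def citations_verified(draft: DraftAdvice, known_authorities: set[str]) -> bool:
    # Competence requirement: every cited authority must exist in a verified source.
    return all(c in known_authorities for c in draft.cited_authorities)

def requires_lawyer_review(draft: DraftAdvice, known_authorities: set[str]) -> bool:
    if not citations_verified(draft, known_authorities):
        return True   # competence: an authority could not be verified
    if draft.confidence < 0.9:
        return True   # competence: low-confidence output needs a human eye
    return False

# Usage: a fabricated citation is caught and routed to the solicitor director.
known = {"Smith v Jones (2001)"}
draft = DraftAdvice("c-42", "You should...", ["Fake v Case (2030)"], 0.95)
assert requires_lawyer_review(draft, known)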

I also asked whether there is any part of the Uniform Law which would catch the contributors, and I decided, after much debate with myself, that section 39 probably does catch them a little, on the construction that I prefer. Section 39 is a provision whose history was to try to insulate lawyers from non-lawyers, but in its modern form it would be effective to impose an obligation on these contributors not to cause or induce the solicitor director to breach their obligations, including their statutory obligations, or the ILP to depart from the principles of legal ethics. So it's this weird situation where the ILP is not bound, but third parties are obliged not to cause or induce it not to comply with the principles of legal ethics. This is a significant construction, because it means that there is some regulation of the contributors, and it does give the statutory regulator some power over them. But it's a very contestable construction.

Five minutes to go.

Again, if we look at the rights of the legal help seeker: they've got access to complaints, but the status of what legal rights they've got, and the nature of the relationship when it's mediated by AI, is unclear, I think because it's never solved by the legislation. I also looked at the powers of the regulators, because of my assumption that proactive regulation will be a really important part. We've talked about this: there's no ability to prevent a corporation from becoming an ILP, there's limited oversight over the ILP, and there's potentially some oversight over the development chain through section 39. Similarly, I looked at the power of the courts: potentially power over any lawyer within the development chain, or any litigants who are relying on AI, but probably not very much more.

So having done that analysis, where did I come out on the readiness of the Uniform Law for AI? I thought there were some positive aspects. It's good that it seeks to ensure the provision of legal services. A really positive aspect is that it's open to new entrants to the legal services market: to the extent that we need new investment and new kinds of corporations providing legal services if we want to get the benefit of AI for access to justice, that's a really good part of the Uniform Law. I was surprised to find that there might be some regulation of those in the AI development chain, and there are some useful protections. But on the whole I thought it was not a well-executed law, and its approach to access to justice is poor: there's no objective to actually promote access to justice, and there's an accidental barrier to using AI as a means to provide pro bono legal services. I thought the protective strategies were inadequate and weak, and the regulatory model is just not suitable, because it's a very reactive model applied to corporations, as opposed to the very proactive model applied to lawyers. So my conclusion was that it doesn't satisfy my test of regulatory readiness, for a variety of reasons: it hasn't got very good strategies; the fundamental legal services values aren't well protected; it doesn't look like it will protect the courts very well; the regulatory oversight model is not appropriate; and I think there's an inappropriate balance between new entrants and existing entrants, with a degree of unfairness for lawyers.

I also didn't think that it was conceptually ready, because part of the driver of the difficulties in the regulatory model is that it's still trapped in a view that sees legal services regulation as a law of lawyers, and therefore it hasn't found the tools to accurately analyze what it means if we have a corporation provide legal services, so it doesn't give us a good set of tools to go forward with. And the status-to-function argument is just too limited: if we've got a new world full of new actors, we need to have a very open conversation about how we're going to regulate it. And of course I thought the regulatory problem of non-lawyers hasn't been solved, partly because of the regulation-by-lawyer strategy, but also because of the failure to deal with the legal nature of the relationship between clients and the ILP.

So what are the social justice implications of this? To my mind, we've got a tool that could potentially be really good, that could be used to provide access to justice and to support the courts and the rule of law, but we don't have the regulatory framework in place that's likely to get us there. So my own next tranche of work is really to deal with what I think a good regulatory model would look like. And that's it.

I'm sorry for speaking at you all so furiously for such a period of time.

Thank you so much, Jane. That was amazing.

I think we'll open up now to our audience for some questions. What would work best is if you have a question, you can either put your hand up and we'll let you speak or if you would prefer, we can also answer questions from the chat. So I think while we're waiting for those to come in, Jane, I might sneakily ask a question of my own, if that's okay.

Naturally, you've just told us that your next bit of research is going to be about the regulatory model that you see as best fitted to solving this issue with AI, given that the current one doesn't adequately address issues of justice and doesn't serve lawyers, the legal profession or the courts very well. I'm interested to know, if you're able to discuss it with us ahead of that next bit of research: what do you think would be the first best step in a good direction for using AI to meet access to justice demands? And what can young lawyers especially, but also law students, do with the AI models that we currently have on hand to ready ourselves for what that might look like in the future?

Okay, so what can we do to enable AI to meet access to justice needs? I think it's a question of design. We can't control the world, but we can control what we do, so we can get involved with people who are interested in promoting access to justice, really think about what a good AI system would look like, and then help to build it. And of course we've got Asli here, who are thinking about that: they're really thinking about how they use their own existing tools, and they've got great thoughts about what good design looks like.

So I think there are opportunities, if we care to take them, to take the tool and design it in a way that's really good, and we can have a parallel stream of work around making sure the regulatory model is in place, so that we've not just got good people doing good things, but we also prevent the harms. I think that's how we get good AI out there. And there's actually a lot of work that's already been done about when it's suitable to use AI: empirical research about who is a good audience for it, what kinds of tasks suit it, and so on. So we can do some really positive things now; we don't have to wait for the regulatory model to catch up.

For young lawyers who are dealing with the question of how to deal with this whole AI thing, I think there are a couple of things. One is understanding what AI can do and where its limits are, and making sure that you're proficient as a user of AI, so that if you're going for jobs and things like that, you can demonstrate that you've got capability. The other is to think about what kind of career path you want: do you necessarily want to go down the stream of being a lawyer who provides advice, or are you interested in an alternative career path? I've had a couple of different careers in law, and one of them has been around managing and designing systems and implementing technology systems, and there are a lot of opportunities in that space. I was talking to someone at a big firm recently and their teams are just expanding. The things that would help you get into those kinds of roles are being across things like design thinking and potentially project management, having an extra string to your bow in terms of your skill set. Does that make sense?

Yeah, it does. Thank you so much. We've got a few questions popping up now in the chat. I think we'll go with the first one and then maybe a second; if we end up having more time, we can look at others. Our first one is from, and forgive me if I pronounce this wrong, Abernathy, who wants to clarify: when you say AI, are you referring to generative AI or other forms? And would legal ethics also apply to AI that focuses more on document management rather than responding to sensitive client data? And then also, theoretically, if the profession begins to over-rely on generative AI to provide advice, will this risk undermining the, I also don't know how to pronounce this, Hegelian dialectic in law, as generative AI would limit the generation of new ideas or challenges to precedent and slow the advancement of legal doctrine? The question is in the chat if you need to reference it again.

Okay, so I'll answer the first question first. When I was thinking about AI, we did a lot of work around what the different forms of AI were, and I actually started my research before GenAI became a thing. So my view was that I was looking at any form of tool that would provide some form of legal assistance. That would include, for example, GenAI, so when GenAI came on the scene it didn't really disrupt my research, because my concept of AI was sufficiently broad. But it would also include, say, a rules-based expert system that produces a particular type of output, such as a document. And if you were using your document management system in a manner where you're applying AI to, say, predict an outcome or identify the best clause to apply, it would include that too. So I certainly had a broad concept of AI in mind when I was doing the research, and I was really thinking about its functional use as a tool to assist people to get legal help. It's a big category of AI, really turning on which tools assist people. And I think that's the perspective we need to keep in mind, because a lot of the things I was reading were talking about how these tools are going to be used by lawyers, or how they're going to replace lawyers, but I think the focus should be on how these tools are going to assist people. So that's what I was thinking about in relation to AI; I hope that answers the question.

And then the question about over-reliance on AI I think is a great question. GenAI in particular is just a prediction of the most likely set of words or text or images, etc., that matches your prompt. Maybe it can be creative, but maybe it can't. I think one of the things AI raises for us is the question of what the nature of the legal work that we do really is, and whether we need that dialectic, that Hegelian kind of debate, to be carried on by a human. Do I think AI might prevent that? Potentially. I hope that answers the question.

Yeah, very interesting. I'll take another question from the chat, and then we might have Anthony, who's raised his hand. We've got a question from Gray asking what you think about GenAI being trained on large amounts of non-specific data, i.e. data not specific to any legal cases, and how you think this impacts the quality of information that GenAI can give.

So that's a great question. The people that I know who are implementing AI in legal aren't using general tools, because those don't have the right data set to be able to produce a legally sound result. If you just train the AI system on a general data set, it might come up with an answer for you which sounds plausible, but if it doesn't have the legal content in there, it can't produce a legally sound result, because it doesn't have the right materials to generate it from. So for the time being, the people I know who are seriously implementing AI in legal are looking at tools that are specifically trained on legal materials. And when I say legal materials, I'm not just confining that to cases and legislation: if you're in a large firm, you've got an enormous wealth of legal material sitting in your document management system, in your accounts management system, and so on, which will give you insights into what a typical transaction looks like, what a typical answer looks like, what a typical document looks like. So I think you have to have good data to get good results.
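
One way to picture that point: a toy, purely hypothetical sketch (not any real product or API) of a tool that answers only from a curated corpus of verified legal materials and abstains when it finds nothing relevant, rather than generating a plausible-sounding but ungrounded answer.

# Toy sketch: naive keyword matching stands in for real retrieval over
# verified legal materials; the corpus entry is illustrative text only.

LEGAL_CORPUS = {
    "unfair dismissal time limit":
        "An unfair dismissal application must be lodged within 21 days "
        "(illustrative text, not legal advice).",
}

def answer_from_corpus(question: str) -> str:
    for topic, passage in LEGAL_CORPUS.items():
        if all(word in question.lower() for word in topic.split()):
            return passage
    return "No verified legal material found; refer the question to a lawyer."

print(answer_from_corpus("What is the unfair dismissal time limit?"))
print(answer_from_corpus("Can my landlord evict me tomorrow?"))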

Yeah, that's some great insight. Thank you so much. Anthony, I've seen you've put your message in the chat. If you'd like to pop in and speak on the call, you're welcome to do so now; otherwise I can read it out.

Can you hear me? Yeah. Amazing. Yeah. So I'm just going to read it out. Given GenAI development is in a state of flux, is it too early to regulate? And then, what's the best way to balance the need for the protection that regulation provides against this rapidly changing technological landscape?

So sorry, the question was: is it too early to regulate? I guess that's really priming the second question, which is probably the actual question: how do you balance regulatory concerns against this rapid technological development? Because it's developing very, very rapidly and changing month to month; I've seen it change so much in 18 months. So I'm just wondering how you balance those two concerns, because you do need the protections that regulation provides, but then you're in some ways building on shifting sands as well. Yeah.

So the shifting sands is an interesting idea, because in some ways we're not building on shifting sands. One of the things which I think we're clear about, at least in the legal services space (it's different in other areas), is what good looks like. We know we want the information to be kept confidential; we know that we want a duty of loyalty. So I don't think we're in a position where we actually want to dilute the protections that we have. To my mind, they can be articulated as design requirements, so that they can guide the people who are developing AI. Instead of designing in the dark, we can clearly say: this is what good AI looks like, so that you create a target. You see the regulation not as a prohibition, not as saying "don't", but as setting the target. Society sets the target: build something that fits within it and we'll be very happy; build something that doesn't, and we might introduce further regulation. So I think it needs that clarity as to what we want to get to, which I think is easier in legal than in other domains. But I also think it needs a very active regulator who gets good information about what's happening, so that there's a dialogue between the regulator and the people participating in this space, because at the moment one of the things that's really hard for the current regulators is that it's impossible to know what's going on. So there needs to be some structure that enables the regulator to get information. We might want to have sandboxes, for example: the Victorian Legal Services Commissioner actually has regulatory sandboxes where they can have a dialogue, let's try this, let's try that, let's see if it works, let's see what regulatory adjustments we might make, so that it becomes something that's not just foisted on us, but something we work on together. Because I'm certainly not against AI; I just want good AI, not, you know, slop.

I agree. Amazing. Thank you so much. Unfortunately, we're out of time, so any existing questions in the chat that haven't been answered we will pass on to Jane via email. If anyone has a question, you can send it to our team and we will pass it along.

I'll move now to the vote of thanks. On behalf of the UTS Faculty of Law and the Law Students' Society, I would like to express our sincere gratitude to you, Dr. Jane Hogan, for so generously taking the time to share your insights on AI regulation and social justice. As prospective lawyers, I know I speak on behalf of everyone here when I say that learning about this research and your contributions to the use of AI in legal services has been very enlightening and very inspiring. This discussion has been thought provoking, and we're so grateful for the opportunity to engage with such critical issues and to learn from your dedication.

I'd also like to thank our team: thank you to Anthea, Bec, Chloe Mann and Monica, along with the entire Brennan Program team, for your efforts in organizing and facilitating tonight's event. To all our attendees, your participation is invaluable, especially those of you who interacted and asked questions. We really hope tonight's discussion serves as a catalyst for further engagement and reflection on social justice and the Brennan program. A reminder that you will automatically receive five RJ points for attending tonight's event. We'd also encourage all of you here today to engage with this topic through further reading, which you can reflect upon for additional RJ points by submitting a 350-word reflection to the ad hoc form in CareersHub.

Also, given the interest that we've had tonight, you might like to know about our upcoming Tech and Social Justice events taking place on Tuesday and Thursday this week, which will explore how technology can drive social change. Brennan students who attend all three events this week will earn five bonus RJ points. In addition, please join us for the sixth Brennan Justice Talk, with Professor Carl Rhodes drawing on his book Stinking Rich: The Four Myths of the Good Billionaire, on the 30th of September at 6 p.m. on campus, with refreshments provided. Oh, that was a mouthful.

Thank you again to our incredible guest speaker. Thank you all for attending. And for those of you who had questions that we weren't able to answer, please send them through and we will send them off to Jane for more of her incredible insights. Thank you so much, everyone. Thank you. Thank you, everyone. Thank you very much.