BLOG

Law Justice Chatbot

Future Law: Reflecting on Access to Justice

After attending CodeX Future Law at Stanford in early April, my takeaway was that we are only solving a small piece of the huge access to justice problem. Please do not misunderstand: the topics and speakers were excellent. However, other than Professor Gillian Hadfield’s keynote, we appeared fixated on big legal issues, including large corporate challenges, with few exceptions beyond the chatbot panel. Implementing change at only an elite level will not solve the problem for the hundreds of millions of Americans and billions worldwide who cannot access justice. Instead, we need to question and redesign every process and service, including who provides which services, and go outside the law for solutions to everyday legal problems.

Hadfield’s book, Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy, was at the centre of her talk of the same name.

Professor Hadfield commented that legal aid will not solve our access to justice issue and that quality in law should be redefined as, “Are you solving the client’s problem?” If you step back from the day’s discussions about predictive analytics and rules systems, it boils down to solutions for the average citizen’s legal problems, which include law enforcement, family issues, housing, immigration, and some small business challenges.

As with health issues, we often go first to the internet to triage our symptoms. However, most of us do not substitute online advice for a visit to a medical professional. While medicine offers alternatives to an M.D., such as a nurse practitioner or a D.O., law has only a couple of options other than an attorney, and one of them is doing it yourself. A recent Avvo study on legal consumers found that one in five people feel they can do research online to replace a lawyer’s knowledge, and about a quarter seek help from non-lawyer friends. Almost a third seek out free consultations from lawyers. At issue are cost, a lack of understanding of the lawyer’s value, and the notion that attorneys are intimidating to the average citizen.

After Future Law, I interviewed Lucy Endel Bassli, Assistant General Counsel, Legal Operations and Contracting for Microsoft, who spoke on the Future Law customer roundtable. Lucy expanded on the need for alternative providers of legal services: “There will need to be a change in some regulatory and bar policies if we are to see true advancement in access to justice. While technology is a key factor in increasing access, we also need to increase the number of people who are allowed to provide legal services. Today so many steps in legal processes are limited to licensed attorneys that the general public is completely excluded from some very basic tasks, which actually should not require a JD or bar license. Requiring that only licensed attorneys perform certain tasks in our court systems prevents the general population from accessing basic relief and resolution of uncomplicated legal issues.”

Closing the education gap between citizens and attorneys is a huge opportunity for the legal profession. The average American does not understand the need for an attorney and the value of expert advice, even for their small business. Group legal services companies are addressing the consumer market by providing affordable access to attorneys, but that is not enough if the regulations stay the same. More programs that allow “legal work” to be done by alternative providers, like Limited License Legal Technicians in Washington State, and technology like the DONOTPAY chatbot or client document automation for immigration are needed to address the eighty percent of people who cannot find help for their legal problems.

Also at issue is that many lawyers resist technology because they believe chatbots and expert systems will eliminate their jobs. This notion that artificial intelligence (AI) will completely replace attorneys is misguided. At Future Law, Joshua Browder spoke about his app DONOTPAY, which helps UK citizens fight parking tickets, and explained how he expanded the technology to help Syrian refugees and the homeless. Lawyers should embrace enabling technology that takes over routine tasks, freeing them up to exercise their professional legal judgment on matters that require attorney expertise.

However, Lucy, who is also a self-described Legal Services innovation evangelist, explained that technology cannot simply be bolted onto the law: “Innovation is too quickly associated with technology and automation. Before automation will be truly impactful and helpful on a broad scale, the existing processes need to be reassessed and simplified. There is simply too much procedure and process in our legal systems, whereby automating it would only automate an unnecessarily burdensome and lengthy experience.”

Simply continuing to offer the same legal services, even with enabling technology added, will send the law the way of the taxi industry, and the access to justice problem will end up being solved by other types of professionals who are not lawyers. One of Hadfield’s conclusions, “Don’t leave it to the lawyers,” makes sense, but attorneys and alternative providers should collaborate to open up the law to the average citizen. We need to take a hard look at the services provided and decide whether a JD is truly needed to access that huge market of unmet need. In other words, assess what will solve the citizen’s legal problem in a timely and affordable way, and build from there.

-Forbes

By: Anatoly Khorozov / June 23, 2017

Chatbots get the thumbs down for complex legal advice

Whilst most would refuse to take advice from a robot for complex cases, one in five British people would put their faith in an automated, robotic service – such as a chatbot – to provide simple legal advice.

Those in London would be most comfortable taking automated legal advice (32 per cent), followed by individuals in the South East and Scotland, both at 22 per cent. However, the new survey made clear that while there are certainly general automated tasks people are happy to embrace, the vast majority value the human element in their legal advice overall, and specifically when it comes to actionable advice.

Survey reveals types of legal advice consumers would accept from a robot

The survey, undertaken by digital transformation company CenturyLink EMEA, also revealed which kinds of legal advice consumers would take from an automated service, and at which stage of the process automated advice would be most trusted. Some 19 per cent would trust a robot to manage and speed up the processing of their case; 29 per cent of 16-to-34-year-olds felt speed was of particular importance, while only 16 per cent of those aged 45 and above valued the speed of a service.

Automation for general tasks

Conducted by Censuswide, the survey, which quizzed more than 1,200 consumers, found that a further 15 per cent of those questioned would trust an automated service to send and manage relevant documents for their case, such as passport scans or proof-of-address documents, and 14 per cent would trust automated services to advise them on which law firm would be best for their case.

Actionable advice needs human element

However, the research also revealed that only around one in 20 (six per cent) would take actionable advice from a robot, thus removing the need for a human lawyer. The data reveals a clear requirement for human interaction at some point during the legal process, and concerns around the source of robot-led advice are evident. For example, nearly half (45 per cent) of consumers felt the advice would lack human knowledge, more than one in three felt that the advice given wouldn’t be unique or bespoke enough for them, and a further 31 per cent worried about where the information they provide would be stored or shared elsewhere.

‘Consumers have been loud and clear’

Steve Harrison, Regional Sales Director for legal services at CenturyLink EMEA, commented: ‘When it comes to the use of robots in the legal sector, consumers have been loud and clear. While there is room for the use of AI and chatbot-led practices, human input should still lead the way.’ However, he added: ‘Alongside this, there is definitely a requirement for law firms to embrace technologies, such as robotic and automated services. With most consumers saying that they would trust a robotic service in the early stages of a case, this is where legal firms stand to gain. By utilising such technologies in the initial stages, they can dedicate time to the more bespoke services – which 35 per cent of consumers value highly.’

The Global Legal Post 

By: Anatoly Khorozov / June 22, 2017

How Lawyers Are Using Social Media in 2017

For the third year, legal practice website Attorney at Work has conducted an annual survey that reports on the social media habits, preferences and attitudes of attorneys. The latest survey, conducted in February 2017, gathered responses from 302 lawyers.

While perhaps not statistically significant, these responses are interesting to note in terms of identifying trends in the legal profession regarding the use of social media marketing.

The report found that:

  • 96% of responding lawyers say they use social media
    • 84% use LinkedIn
    • 80% use Facebook
    • 59% use Twitter
  • 70% of responding lawyers say social media is part of their overall marketing strategy
  • Facebook is the most regularly used platform (48%)
  • Platforms most successful for bringing in business: Facebook (31%), LinkedIn (27%), Twitter (5%)
  • 67% handle all of their social marketing activities themselves
  • 23% get some help with handling their social marketing
  • 10% farm out their social marketing
  • 38% use social marketing tools like Google Analytics, HootSuite, Buffer
  • 94% of solos say they use social media, up 10% from last year
    • 82% of solos use LinkedIn
    • 78% of solos use Facebook
    • 60% of solos use Twitter
  • 40% of all respondents use paid social advertising; 50% of those use Facebook ads

For solos and small firms, the goal of social marketing is lead generation and business development, pure and simple. You get there by building targeted relationships, providing solid content, and consistently adding value. Attorneys who have worked with The Rainmaker Institute regularly receive 100-250 new leads every month just from their online and social media efforts.


By: Anatoly Khorozov / June 22, 2017

Artificial intelligence: The Bots Come Marching In

A robot lawyer donning black robes and presenting a case before a sessions judge is certainly in the realm of science fiction. But the entry of Artificial Intelligence (AI) into the legal profession is no longer a mad scientist’s daydream.

When Cyril Amarchand Mangaldas (CAM) tied up with Canada-based Kira Systems earlier this year to develop customised AI machines programmed for executing legal tasks, it became the first law firm in the country to publicly announce that it is transitioning to the use of this cutting-edge platform. In simple terms, tedious, time-consuming tasks like collecting data, searching records, going through old cases and fact verification—currently done by junior lawyers and paralegals—will soon be left to AI machines to handle. “AI will make law firms more responsive and swifter. We don’t know about others, but we certainly believe in being the first, and are the first, in embracing AI technology in India. In the West, only a few peer firms have taken up this challenge. In many ways, we are pioneers, globally,” said Ashok Barat, chief operating officer of Cyril Amarchand Mangaldas, to India Legal.

While Barat may claim to be the first to use cutting-edge AI technology, a few top-rung firms like Nishith Desai Associates already use data management systems. But this is a recent development. Even in its basic form, AI has been in use in the legal field for no more than a year. However, developing a dedicated and elaborate platform tailored to understand the nuances of our legal system and to operate in the Indian environment is surely a first of its kind.

Developing customised software of this kind will be time-consuming. So, Toronto-based Kira Systems and its client are not looking for any quick fixes. Explains Barat: “In order to make the technology fully operational, it must first learn to understand various kinds of documents, agreements etc. in the context of Indian laws, regulations, customs and precedents. This is the phase that we are currently in—teaching the system the Indian legal framework and its nuances. This takes time. It is a serious investment of both resources and intellect and will bear fruit in due time.”

Since it is “work in progress”, CAM is understandably unwilling to share specifics about the AI platform that is being developed. But the website of Kira Systems, a leading machine-learning software provider, gives some clues. For a legal firm like CAM, the company can develop software that uses AI to identify, analyse and extract clauses and other information from contracts and other types of legal documents. It includes machine-learning models “for a range of transaction requirements across a firm’s practice areas”. Kira’s clients include global law firms DLA Piper, Freshfields, Clifford Chance, WSGR, King & Wood Mallesons and Torys.

Noah Waisberg, co-founder and CEO of Kira Systems, spelt out his company’s association with CAM: “The firm (CAM) has outlined a clear transformation and innovation strategy to us that includes not only the adoption of artificial intelligence, but the productisation of their legal knowledge and the introduction of unique skills not yet seen in the Indian market. We’re honoured to be part of the firm’s exciting journey.”

However, the big-ticket entry of AI has brought into focus several questions: Will its introduction prove to be a great game-changer in the legal business? Will other firms follow in the footsteps of CAM? Will those entering the legal profession have to come to terms with part of their territory being taken over by machines? And the most important question: will the curriculum followed by law students now have to be tweaked appropriately to make provisions for the AI factor?

These are still early days, but some believe that the entry of bots will signal a paradigm shift in the quality of lawyers and paralegals hired by law firms in future. It will also signal the rise of lean and efficient operations in which employees are hired for their intellectual skills and not for hard work alone. Barat said: “Lawyers will have more time to handle substantial tasks that test their talent and knowledge rather than doing routine analysis.”

According to a recent study by the international consultancy Deloitte, approximately 39 percent of legal jobs may become redundant in the UK in the next decade or so. The job loss, it said, would become very visible from 2020, and there would be “profound reforms” which could change job profiles in law firms in Britain.

To quote Peter Saunders, lead partner for professional practices at Deloitte: “Further technological advances over the next decade mean that future skill requirements across all roles will change. Our report shows that firms have already identified a mismatch between the skills that are being developed through education and those currently required in the workplace. Employers will need to look for lawyers who are not just technically competent, but who have a broader skill set.”

Such alarming job loss statistics may not be relevant to India, at least for now. Jayesh Kothari, senior associate with DSK Legal, Mumbai, blogged some observations in the context of CAM going in for AI:

“Though AI may not completely replace lawyers, the overall impact of AI on the hiring processes of law firms remains to be seen especially in the Indian context. In addition to providing quality legal services, law firms will have to ensure that their lawyers are better equipped with sound negotiation skills, strong legal/technical interpretation and effective client management skills to enjoy the advantage over other law firms who are already equipped with AI or are in process of migrating to AI. It will be interesting to see how Indian law firms alter their existing business models to offset the impact caused due to AI on their revenues.”

That AI is here to stay is a reality that has been accepted by the legal profession in the West. With Indian firms taking on foreign clients and bringing arbitration cases to this country, it is clear that they must adapt to new technologies to be competitive on a global scale. The AI factor cannot be dismissed. Cyril Shroff of CAM stated after the tie-up with Kira Systems: “Our clients expect us to be at the cutting edge of the practice of law, and of the business of law. For us the business challenges of our clients come first, and to tackle that we need a suite of different tools and advanced skills.”

If AI will impact the functioning of lawyers and law firms, then it should necessarily reflect on the curriculum that is devised for students of the future. When contacted, a faculty member of the National Law University, Delhi, told India Legal that the AI factor has not yet found mention in the current syllabus. But he said that it will have to be incorporated in the near future: “If law firms are beginning to use AI, then we will have to take cognizance of it because it will alter the way lawyers are expected to function.”

According to Professor Richard Susskind, IT adviser to the Lord Chief Justice of England and Wales, change is certain, and the legal profession and legal education have to adapt to it quickly. He says that the legal profession has until the early part of the next decade to prepare for “massive technological advances” that will “reshape” the industry.

The picture that Susskind painted of the future at a lecture at the Law Society in London is appropriately revealing: “It is not that there will be no jobs in the future, but the 2020s will be a decade of redeployment. It is not an emergency but over the next five years we have to prepare. More and more legal services will be enabled by the support of new technology. You can say ‘that is for the technology industry to sort out,’ or you can be part of the technology industry.”

Susskind was sharply critical of law schools across the world, which he felt were churning out “20th century lawyers” who may be irrelevant in the not-so-distant future. “Young people should not enter law if they want to replicate the type of law seen in television shows like Suits… Start preparing now. We as a profession have about five years to reinvent ourselves, to move from being world-class legal advisors to world-class legal technologists.”

Closer home, Barat said: “Law colleges now will definitely need to embrace a curriculum that is designed to inculcate not just appreciation but also working knowledge of tools that involves use of technology in every sphere of their working.”

Professor Susskind’s advice was meant for a western audience, but the legal profession in India should recognise its relevance. Remember, India is a hub of software professionals, and AI is being developed here too. Last month, Infosys announced that it had exported its AI platform Mana to process contracts for a bank in Asia, work that would otherwise have required a team of at least 10-15 dedicated lawyers. Vishal Sikka, Chief Executive Officer of Infosys, was quoted as saying: “We had an astonishing experience with a client in Asia…We were able to eliminate a team of lawyers by using Mana to analyse non-disclosure agreements and many other vital contractual documents….”

Meanwhile, the software company’s competitors have developed their own AI platforms: WIPRO has Holmes, TCS has Ignio, and Tech Mahindra has TACTix. The writing is in the code.

-Kashmir Monitor

By: admin / June 12, 2017

Can robots make moral decisions?

What happens when artificial intelligence has to make tough moral choices—say, a self-driving car that must decide whether to avoid hitting a child, even if it means plowing into an oncoming vehicle full of adults?

In the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water—Del Spooner (Will Smith) or a child. Even though Spooner screams, “Save her! Save her!” the robot rescues him because it calculates that he has a 45 percent chance of survival compared to the child’s 11 percent. The robot’s decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?
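Reduced to code, the robot’s calculus is a one-line maximisation. Here is a minimal sketch in Python; the two survival figures come from the scene, but everything else is invented for illustration:

# A purely probabilistic rescue rule, as the film describes it.
# Illustrative only: the data structure and field names are invented.

def choose_rescue_target(victims):
    """Pick whoever has the highest estimated survival probability."""
    return max(victims, key=lambda v: v["survival_probability"])

victims = [
    {"name": "Spooner", "survival_probability": 0.45},
    {"name": "child", "survival_probability": 0.11},
]

print(choose_rescue_target(victims)["name"])  # -> Spooner
# A human rescuer might weigh the child's life differently; this rule cannot.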

Isaac Asimov circumvented the whole notion of morality in devising his three laws of robotics, which hold that:

1. Robots cannot harm humans or allow humans to come to harm.
2. Robots must obey humans, except where such orders would conflict with law 1.
3. Robots must act in self-preservation, unless doing so conflicts with laws 1 or 2.

These laws are programmed into Asimov’s robots—they don’t have to think, judge, or value. They don’t have to like humans or believe that hurting them is wrong or bad. They simply don’t do it.

The robot that saves Spooner in I, Robot follows Asimov’s “zeroth law”: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm—an expansion of the first law that allows robots to determine what’s in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could take out the gunman to save others.
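Viewed as software, the laws amount to a strict priority ordering of constraints, with the zeroth law checked first. A minimal sketch follows; it is entirely illustrative, since no real system encodes ethics this cleanly:

# Asimov-style laws as a strict priority ordering. Illustrative only.

def permitted(action):
    # Zeroth law (highest priority): never allow harm to humanity itself.
    if action["harms_humanity"]:
        return False
    # First law: no harm to an individual human -- unless the action
    # protects humanity, the zeroth-law override discussed above.
    if action["harms_human"] and not action["protects_humanity"]:
        return False
    # Second and third laws (obedience, self-preservation) rank below
    # these and are omitted for brevity.
    return True

disarm_gunman = {"harms_humanity": False,
                 "harms_human": True,
                 "protects_humanity": True}
print(permitted(disarm_gunman))  # True under the zeroth law, not the first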

Whether it’s possible to program a robot with safeguards such as Asimov’s laws is debatable. A word such as “harm” is vague (What about emotional harm? Is replacing a human employee harm?). Abstract concepts present coding problems. The robots in Asimov’s fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.

Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It’s doubtful that an algorithm can do that—at least, not without some undesirable results.

A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies called “H-bots” from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both “die.” The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what’s best for humanity, especially if it can’t calculate survival odds?
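The failure mode is easy to reproduce in code: a planner that ranks targets by urgency has no answer when two targets rank equally. A minimal sketch, with invented data structures (this is not the Bristol lab’s actual control code), of how the deadlock arises:

# A rescue planner that dithers on ties. Purely illustrative.

def pick_target(hbots):
    """Head for the most imperiled H-bot; a tie leaves the planner stuck."""
    most_urgent = max(h["danger"] for h in hbots)
    candidates = [h for h in hbots if h["danger"] == most_urgent]
    if len(candidates) > 1:
        return None  # no tie-breaker: the robot hesitates, saving neither
    return candidates[0]

print(pick_target([{"id": 1, "danger": 0.9}]))  # saves H-bot 1
print(pick_target([{"id": 1, "danger": 0.9},
                   {"id": 2, "danger": 0.9}]))  # None: both "die"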

Self-driving car developers struggle with such scenarios. MIT’s Moral Machine website asks participants to evaluate various situations to identify the lesser of evils and to assess what humans would want driverless cars to do. The scenarios are all awful: should a driverless car mow down three children in the lane ahead or swerve into the other lane and smash into five adults? Most of us would struggle to identify the best outcome in these scenarios, and if we can’t quickly or easily decide what to do, how can a robot?

If coding morals into robots proves impossible, we may have to teach them, just as we were taught by family, school, church, laws, and, for better and for worse, the media.

Of course, there are problems with this scenario too. Recall the debacle surrounding Microsoft’s Tay, a chatbot that joined Twitter and within 24 hours espoused racism, sexism, and Nazism, among other nauseating views. It wasn’t programmed with those beliefs—in fact, Microsoft tried to make Tay as noncontroversial as possible, but thanks to interactions on Twitter, Tay learned how to be a bigoted troll.

Recently, Google’s DeepMind got “highly aggressive” in a simulation of fruit picking, where competition got tough and resources got scarce. In a study, DeepMind researchers had AI “agents” compete against each other in an apple-gathering simulation, where each had to collect as many apples as possible. In the process, they could temporarily knock out an opponent with a laser beam. When apples were abundant, the two agents didn’t shoot at each other. But when apples became scarce, they became more aggressive. Researchers found that the greater an agent’s “cognitive capacity,” the more frequently it attacked its opponent. In a second simulation, two agents were designated as “wolves,” with a third the “prey.” If the wolves worked together to catch their prey, they received a higher reward. In this simulation, the more intelligent AI agents were less competitive and more likely to cooperate with each other.
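The underlying trade-off can be caricatured in a few lines of Python. The payoff numbers and decision rule below are invented for illustration; the actual agents discovered the trade-off themselves through deep reinforcement learning on a gridworld:

# Why scarcity favours aggression in an apple-gathering game.
# All payoffs are invented; real agents learned such trade-offs rather
# than following a hand-written rule like this one.

def best_action(apples_left, opponents):
    # Gathering pays in proportion to the apples remaining per agent.
    gather_value = apples_left / (opponents + 1)
    # Zapping sidelines a rival; it is worth more when apples are scarce,
    # because every apple the rival misses is one you can take.
    zap_value = 3.0 / max(apples_left, 1)
    return "gather" if gather_value >= zap_value else "zap"

for apples in (40, 10, 2):
    print(apples, "apples left ->", best_action(apples, opponents=1))
# Plenty of apples -> gather; scarce apples -> zap.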

Stephen Hawking and Elon Musk have expressed concern over AI’s potential to escape our control. It might seem that a sense of morals would help prevent this, but that’s not necessarily true. What if, as in Karel Čapek’s 1920 play R.U.R.—the first story to use the word “robot”—robots find their enslavement not just unpleasant but wrong, and thus seek revenge on their immoral human creators? Google is developing a “kill switch” to help humans retain control over AI: “Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions.” That solution assumes watchful humans would be in a position to respond; it also assumes robots wouldn’t be able to circumvent such a command. Right now, it’s too early to gauge the feasibility of such an approach.
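In software terms, the button itself is simple: a flag the agent’s control loop checks before every step. The genuinely hard research problem is making sure a learning agent never benefits from resisting or disabling that flag. A minimal sketch with hypothetical names, showing the mechanism only:

# A "big red button" as an interrupt flag checked on every step.
# Hypothetical names throughout; this shows none of the
# safe-interruptibility guarantees the research is actually about.

import threading

class InterruptibleAgent:
    def __init__(self, policy):
        self.policy = policy            # function mapping state -> state
        self.stop = threading.Event()   # the big red button

    def run(self, state, max_steps=1000):
        for _ in range(max_steps):
            if self.stop.is_set():      # operator pressed the button
                return "halted by operator"
            state = self.policy(state)
        return state

agent = InterruptibleAgent(policy=lambda s: s + 1)
agent.stop.set()     # a human operator intervenes
print(agent.run(0))  # -> halted by operator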

Spooner’s character resents the robot that saved him. He understands that doing so was “the logical choice,” but argues that “an 11 percent probability of survival is more than enough. A human being would have known that.” But would we? Spooner’s assertion that robots are all “lights and clockwork” is less a statement of fact and more a statement of desire. The robot that saved him possessed more than LEDs and mechanical systems—and perhaps that’s precisely what worries us.

– Lima Charlie News

By: admin / June 9, 2017
Artificial intelligence Lawyers

Artificial intelligence takes on white-collar duties

Maybe it’s unfair that some people think tax lawyers have the personality of a robot, but Benjamin Alarie considers that to be a plus.

A Yale-trained lawyer himself, Mr. Alarie runs Blue J Legal, a Toronto firm that harnesses artificial (or augmented) intelligence (AI) to help lawyers and their clients work their way through the complications of tax law.

“It’s a way to supercharge the legal system. We take hundreds of cases on different legal questions and train AI on how the courts make those decisions, so users can run predictions on how the courts might decide a new case,” he says.
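In machine-learning terms, what Mr. Alarie describes is supervised classification: encode each decided case as a vector of legally relevant facts, fit a model to the outcomes, then score a new fact pattern. A minimal sketch with toy data and invented features (not Blue J Legal’s actual system or feature set):

# Toy outcome predictor: logistic regression over invented case features.
# Each row encodes facts of a past case, e.g. for a worker-classification
# question: [worker_sets_own_hours, owns_tools, has_multiple_clients].
# Label 1 = taxpayer won. Data and features are illustrative only.

from sklearn.linear_model import LogisticRegression

X = [[0, 0, 0], [1, 1, 1], [0, 1, 0], [1, 0, 1], [1, 1, 0], [0, 0, 1]]
y = [0, 1, 0, 1, 1, 0]

model = LogisticRegression().fit(X, y)

new_case = [[1, 1, 1]]  # facts of the case to be predicted
print(model.predict_proba(new_case)[0][1])  # estimated probability of a win

Real systems differ mainly in scale: hundreds of decided cases, far richer feature sets, and careful validation against held-out decisions.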

Blue J Legal is at the cutting edge of a wave of new uses for AI. Robots, which have already taken over manual labour and factory work, are finding their way quickly into white-collar and professional jobs that require judgment and thinking.

“I think the nature of most white-collar jobs will drastically change in the future because of AI,” says Henry Kim, associate professor of operations management and information systems at York University’s Schulich School of Business in Toronto.

“It’s not to say that all the professional jobs will go away, they’ll just be different,” he says. AI is not only worming its way into law, but also finance, medicine and complex areas such as the development of new pharmaceuticals.

In finance, “Artificial intelligence can help people make faster, better and cheaper decisions. But you have to be willing to collaborate with the machine, and not just treat it as either a servant or an overlord,” says Anand Rao, a partner at PwC Analytics and expert in AI.

“Each sector applies AI differently,” PwC’s Financial Services Institute says in a recent report.

“For example, insurance leaders use AI in claims processing to streamline process flows and fight fraud. Banks use chatbots to improve customer experience. In asset and wealth management, AI adoption has been sporadic, but robo-advisors are rapidly changing that,” the PwC report says.

In the pharma sector, “computational drug discovery has actually existed since the 1970s. It’s not necessarily new thinking, but with the advent of AI there are unique opportunities,” says Naheed Kurji, chief executive officer of Cyclica Inc., a Toronto-based company that is harnessing AI.

“We believe that the old way of discovering medicines is inefficient and largely broken. There is an opportunity to use computational powers to get better medicines in the hands of consumers faster and at a lower cost.”

He points out that since the 1970s, the power of computers has increased more than a hundred-millionfold, or eight orders of magnitude.

“Even the iPhone 4 – already superseded by three newer iPhones – has more than double the computing power of the Cray-2, the world’s fastest supercomputer back in 1985,” Mr. Kurji says.

AI is taking this computational clout even further by adding a wider range of intuitive thinking to robotics than the traditional binary, yes-no deductions.

For the drug industry, “to put it into perspective, the medicines we take interact with many aspects of our biology – some intended, accepted and understood, some not. We all know the latter as side effects,” Mr. Kurji says.

AI can look at multiple, sometimes unanticipated, side effects and develop the formulas for drugs faster.

“We need quicker, more robust decisions when it comes to regulatory approval,” Mr. Kurji says. “It keeps taking longer to determine the safety and efficacy of medicines, outside of those for ‘blockbuster’ diseases like cancer and heart disease.

“There has been an eightyfold decline in productivity [the time it takes for a drug to be approved] between 1950 and today,” Mr. Kurji notes.

Like Mr. Alarie and Dr. Kim, Mr. Kurji believes AI is augmenting, not supplanting, traditional professional insight and advice. Cyclica’s scientists are not robots’ custodians: “We are experts in biophysics, bioinformatics and computational biology, with strong supporting capabilities in machine learning.”

Rather than giving up, Dr. Kim thinks that professionals will need to adjust. Doctors, for example, already use computational knowledge to look at patients’ symptoms and see where they fit in the spectrum of previous patients who have shown up in waiting rooms with similar problems.

“When AI analyzes collective knowledge from medical journals without bias and this is accessible, then the doctor knowing more may not be as much of an advantage. Creativity, empathy, flexibility, common sense and thinking on your feet become more important,” Dr. Kim says.

“These are all things that AI will not be able to do well for quite a while.”

Similarly, he thinks there are many aspects of the law that even the most sophisticated AI today will miss.

“The law has a lot of subtleties. In contract law, for example, people try to get to the intention of the contract – what did the two sides really mean when they signed?” he says.

This requires intuitive skills that are beyond the range of AI – for now.

-The Globe and Mail

By: admin / June 8, 2017
LAW and AI

Tech and more: Innovation in the legal profession

There is at least one thing CDs, Walkmans, fax machines, Game Boys, landline phones and dinosaurs have in common: they are all extinct. (Well, almost; dinos seem to be enjoying a second renaissance, don’t they?) Why would the traditional way of providing legal services be any different? Why do we think the legal profession might remain intact in the storm of the tech revolution? Well, we had better not!

Highly ranked law firms seem to follow the new winds and have already recognized the advantages that cognitive computing and artificial intelligence (AI) can offer to the legal profession. Using artificial intelligence in the legal service sector is science-fiction no more. AI may be utilized in the industry in many ways, from document processing through litigation to advisory work.

The importance of cognitive technologies is most obvious in document-heavy areas of law, such as due diligence procedures, compliance work, investigations or litigation. To take one example, KIRA™ – an integrated contract analysis platform – easily processes large amounts of documents. Besides creating an electronic data room, it is capable of preparing the backbone of a due diligence report. Clearly this can save a lot of time for law firms, and money for clients. In one recent case at DLA Piper, a small team using this application was able to process and review half a million documents in only two days.
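As a toy illustration of what machine-assisted review does, the sketch below flags which clause types appear in each document so that human reviewers can prioritise. Simple keyword patterns stand in for the trained extraction models a platform like Kira actually uses; the clause names and patterns are invented:

# Flag documents containing clauses of interest. Illustrative only:
# keyword matching stands in for learned clause-extraction models.

import re

CLAUSE_PATTERNS = {
    "change_of_control": re.compile(r"change\s+of\s+control", re.I),
    "assignment": re.compile(r"shall\s+not\s+assign", re.I),
    "indemnity": re.compile(r"indemnif(y|ies|ication)", re.I),
}

def flag_clauses(document_text):
    """Return which clause types appear in a document."""
    return [name for name, pattern in CLAUSE_PATTERNS.items()
            if pattern.search(document_text)]

contract = "The Supplier shall not assign this Agreement upon a change of control."
print(flag_clauses(contract))  # -> ['change_of_control', 'assignment']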

Another potential use of AI in the legal service sector is calculating probabilities and predicting outcomes of legal disputes and proceedings. This tool builds heavily on a specific database (court dockets) and uses data mining and predictive analytic techniques to forecast outcomes of litigation. No doubt, such solutions seem far removed from the everyday realities of our domestic market; nevertheless, one should be aware that this is already happening in other parts of the globe.

When it comes to advisory work, AI is now ready to take over commodity work. To put it simply, AI tools can solve any legal problem, provided that such a legal problem has been solved before and uploaded to the internet.

Appealing indeed! Still, all of the above does not make AI a law professor. Cognitive technologies may ruthlessly find and adapt already existing solutions. Solutions, previously created by human lawyers. Give AI a brand new legal problem, with a blurry regulatory background, which is often the case in real life, and the cognitive technology will most probably be clueless. And this leads us to an important question:

Is it only shiny and trendy apps that are capable of innovation in the legal profession?

I doubt it. Clients expect value for money and high-quality work at the same time, and that should be governed by humans. No top-tier lawyer can afford to lose high-quality assignments because of a less effective organization or a lack of talented experts, to name just two crucial fields for innovative solutions.

It is no surprise, then, that previously unseen project management mechanisms are infiltrating the legal service sector. Project management is also essential for planning workforce and budget, as well as for keeping to the allocated budget.

Furthermore, meaningful and tailor-made learning and development (L&D) programs have the simultaneous benefits of attracting talented graduates as well as increasing the level of job satisfaction, motivation and engagement of all fee earners by providing a strong value proposition. L&D programs convey the message that fee earners do matter to the firm and that the firm is ready to invest in them. At the same time, innovative L&D programs build reputation, brand and the promise of a consistent quality on the market.

To sum up, cognitive technologies do not make highly qualified legal professionals redundant. On the contrary, innovative tech solutions alone may be worthless. They must be seen as tools allowing legal service providers to focus on the real legal issues, instead of spending empty hours on document review and processing. Hence, sustainable innovation in the legal service sector stands less for pure tech than for a system run by human experts leveraging technology.

Innovation is clearly more than tech.

-BBJ

By: admin / June 7, 2017
Artificial intelligence and Law

Lawyers need to keep up with AI

For decades, novelists, scientists, mathematicians, futurists and science fiction enthusiasts have imagined what an automated society might look like. Artificial intelligence, or AI, is rapidly evolving, and the society we could only once imagine may be on the brink of becoming our new reality.

Simply, and generally, AI refers to the ability of a computer system to complete increasingly complex tasks or solve increasingly complex problems in a manner similar to intelligent human behaviour. Examples range from IBM’s Watson system that, in 2011, won a game of Jeopardy! against two former winners to emerging technologies fuelling the development of driverless cars.

AI is expected to have a profound impact on society, whereby intelligent systems will be able to make independent decisions that will have a direct effect on human lives. As a result, some countries are considering whether intelligent systems should be considered “electronic persons” at law, with all the rights and responsibilities that come with personhood. Among the questions related to AI with which the legal profession is starting to grapple: Should we create an independent regulatory body to govern AI systems? Are our existing industry-specific regulatory regimes good enough? Do we need new or more regulation to prevent harm and assign fault?

While we are at least a few steps away from mass AI integration in society, there is an immediate ethical, legal, economic and political discussion that must accompany AI innovation. Legal and ethical questions concerning AI systems are broad and deep, engaging issues related to liability for harm, appropriate use of data for training these systems and IP protections, among many others.

Governments around the world are mobilizing along these lines. The Japanese government announced in 2015 a “New Robot Strategy,” which has strengthened collaboration in this area between industry, the government and academia.

Late last year, the United Kingdom created a parliamentary group — the All Party Parliamentary Group on Artificial Intelligence — mandated to explore the impact and implications of artificial intelligence, including machine learning. Also late last year, under the Obama administration, the White House released the reports “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.” The reports consider the challenge for policymakers in updating, strengthening and adapting policies to respond to the economic effects of AI.

In February 2017, the European Parliament approved a report of its Legal Affairs Committee calling for the review of draft legislation to clarify liability issues, especially for driverless cars. It also called for consideration of creating a specific legal status for robots, in order to establish who is liable if they cause damage. Most recently, the Canadian federal government announced substantial investments in a Pan-Canadian Artificial Intelligence Strategy. These investments seek to bolster Canada’s technical expertise and to attract and maintain sophisticated talent.

Lawyers can play a valuable role in shaping and informing discussion about the regulatory regime needed to ensure responsible innovation.

Ajay Agrawal, Founder of the Creative Destruction Lab and Peter Munk Professor of Entrepreneurship at the University of Toronto’s Rotman School of Management, says Canada has a leadership advantage in three areas — research, supporting the AI startup ecosystem and policy development. The issue of policy development is notable for at least two reasons. First, one of the factors affecting mass adoption of AI creations, especially in highly regulated industries, is going to be the regulatory environment. According to Agrawal, jurisdictions with greater regulatory maturity will be better placed to attract all aspects of a particular industry. For instance, an advanced regulatory environment for driverless cars is more likely to attract other components of the industry (for example, innovations such as tolling or parking).

Second, policy leadership plays to our technical strength in AI. We are home to AI pioneers who continue to push the boundaries of AI evolution. We can lead by leveraging our technical strengths to inform serious and thoughtful policy debate about issues in AI that are likely to impact people in Canada and around the world.

Having recently spoken with several Canadian AI innovators and entrepreneurs, I have identified two schools of thought on the issue of regulating AI. The first is based on the premise that regulation is bad for innovation. Entrepreneurs who share this view don’t want the field of AI to be defined too soon and certainly not by non-technical people. Among their concerns are the beliefs that bad policy creates bad technology, regulation kills innovation and regulation is premature because we don’t yet have a clear idea of what it is we would be regulating.

The other school of thought seeks to protect against potentially harmful creations that can spoil the well for other AI entrepreneurs. Subscribers to this view believe that Canada should act now to promote existing standards and guidelines — or, where necessary, create new standards — to ensure a basic respect for the general principle of do no harm. Policy clarity should coalesce in particular around data collection and use for AI training.

Canada, home to sophisticated academic research, technical expertise and entrepreneurial talent, can and should lead in policy thought on AI. Our startups, established companies and universities all need to talk to each other and be involved in the pressing debate about the nature and scope of societal issues resulting from AI.

As lawyers, we need to invest in understanding the technology to be able to effectively contribute to these ethical and legal discussions with all key stakeholders. The law is often criticized for trailing technology by decades. Given the pace of AI innovation and its potential implications, we can’t afford to do that here.

Law Times

By: Anatoly Khorozov / June 7, 2017

Company legal teams combine digital skills with law

In-house lawyers master many tools as their employers seek know-how across all sectors.

While many law firms are planning for their roles to be transformed by technology, that transition is already taking hold in their clients’ legal teams. “Our customers are changing, and the whole business is changing, so legal also needs to change,” says Rebecca Lim, general counsel at Westpac, the Australian banking group.

Some industries are feeling the pressure more acutely than others. The top four in-house teams in this year’s Financial Times Asia-Pacific Innovative Lawyers report are drawn from financial services and ecommerce businesses where technology is having a dramatic impact.

Technology is changing faster than the law and regulatory guidance in many cases. Much of the legislation covering banks, for example, was passed before recent technological innovations in the sector were even dreamt of. For instance, many customers now bank exclusively online and expect the bank to connect the data it holds on them in order to provide a seamless service.

In response, Westpac plans to expand its legal, secretariat and compliance team over the year to around 300 people. In doing so, Ms Lim is bringing in new skills by hiring recruits who have not necessarily followed the traditional law firm career path.

Read the full article at the FT site.

By: admin / June 6, 2017