The use of AI applications in Türkiye has gained significant momentum in recent years, presenting both opportunities and legal challenges. This article provides an overview of the use of AI across sectors, including healthcare, finance, transportation, and public services, and highlights key legal issues arising from the increased autonomy and decision-making capabilities of AI systems. In Türkiye, as in many jurisdictions, legal frameworks are evolving to address these issues, aiming to strike a balance between fostering innovation and protecting individuals’ rights and societal interests in the AI era.
Use of Artificial Intelligence (AI) applications in Türkiye
The growing use of automation and artificial intelligence technologies in all aspects of life presents a wide variety of legal challenges in an evolving regulatory landscape, not only for those developing products in the field but also for anyone using such technologies as part of their business. The Artificial Intelligence Practice brings together lawyers from across Bicak with a diverse array of technical and legal expertise and skills, and is well positioned to assist our clients in identifying, addressing, and responding to these challenges.
Business involving artificial intelligence
Running an innovative business, especially one involving artificial intelligence (AI), comes with its own challenges and legal risks. As AI applications continue to advance and become integrated into various business models and industries, businesses’ exposure to litigation is likely to increase.
Companies and organizations can safeguard against and mitigate such risks by
- ensuring that the AI application was programmed correctly;
- maintaining documentation to show that the AI input was correct, appropriate, and not corrupted;
- sufficiently supervising the AI application and its output; and
- establishing guardrails against users misusing the AI application (see the illustrative sketch below).
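By way of illustration only, the sketch below shows what such a guardrail might look like in code. Every name in it, including the call_model stand-in, the BLOCKED_TERMS policy list and the guarded_query wrapper, is a hypothetical assumption rather than any real product’s API; an actual deployment would use its own moderation layer and model interface.

```python
# Illustrative sketch only: a minimal input guardrail placed in front of a
# hypothetical AI model. All names (call_model, BLOCKED_TERMS, guarded_query)
# are assumptions, not a real library API.

BLOCKED_TERMS = {"malware", "stolen card numbers"}  # placeholder policy list


def call_model(prompt: str) -> str:
    """Stand-in for the real model interface (assumed, not a real API)."""
    return f"model response to: {prompt!r}"


def guarded_query(prompt: str) -> str:
    # Reject input that violates the usage policy before it reaches the model.
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            raise ValueError(f"prompt rejected by guardrail: contains {term!r}")
    return call_model(prompt)


if __name__ == "__main__":
    print(guarded_query("Summarise the new data protection rules."))
```

A simple keyword filter like this is of course only a starting point; the point is that a documented, testable checkpoint sits between the user and the model.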
Potential claimants’ allegations
We anticipate three categories of allegations by claimants with respect to AI applications:
- Allegations of incorrect information created by generative AI, such as chatbots and image or voice generators
- Allegations of interference with other IT systems resulting in downtime or financial loss, e.g. if an AI application allegedly made adverse investment decisions
- Real-world accidents allegedly caused by AI applications, e.g. by autonomous vehicles, robots or AI-controlled industrial facilities
Against whom might such allegations be made?
Potential parties against whom such allegations might be made include, of course, the developer of the AI system, its operator (if different from the developer) and its user. In many cases, there may also be a distributor involved who may face such allegations as well. Finally, many of the above will have insurance coverage, which professional claimant firms typically also target. It has also been suggested that AI systems be given a separate legal personality as an “e-person”, which could make the AI system itself a target as well. However, the concept of an e-person has so far found few supporters, rendering a lawsuit against an AI application itself unlikely in the foreseeable future.
What legal bases may be invoked in civil disputes revolving around AI?
From a legal perspective, potential claimants may seek to rely on various areas of law to support their alleged claims regarding the use of AI:
- Contractual basis: Claims may be based on contract law, as there are usually contractual relationships between the developer, operator and users of AI systems. However, this is the area where the contracting parties can protect their interests in the simplest and most nuanced way by using sensible contractual clauses.
- Product liability basis: Users and third parties may try to raise claims on the basis of product liability laws. A prerequisite would be that AI systems are considered ‘products’ and contain a ‘defect’.
- Tort law basis: Most legal systems provide for fault-based claims for damages (i.e. tort law), which claimants may also try to apply to AI systems.
- Regulatory basis: Claims may also be based on specific regulations, such as data protection or intellectual property (IP) law. For example, once personal data is processed by AI systems, data protection regulations (such as the GDPR in Europe) impose requirements that must be complied with. According to Art. 82 GDPR, any person who suffers pecuniary or non-pecuniary damage as a result of a breach of data protection requirements may bring non-contractual claims for such damage.
- Insurance: The increased use of AI systems opens the door to insurance for AI products. Just as many countries require car owners to take out insurance, there are voices calling for manufacturers and professional operators of AI systems to be required to take out specialized liability insurance. Such insurance goes beyond the typical coverage of cyber insurance, which generally does not cover bodily harm, brand damage or damage to physical property. In addition to reducing costs, specialized AI insurance could further encourage the development of best practices for companies using AI, as insurers often impose requirements.
Open legal questions
Civil claims for damages typically require (1) some sort of breach of the law or a contract, (2) an element of fault, and (3) a causal link between the breach and the damage. The implementation and use of AI systems poses several open legal questions to potential claimants in this regard:
- Burden of proof: One of the main challenges for alleged damage caused by AI applications is the burden of proof. In general, the injured party bears the burden of proof. However, regulators and legal commentators take the view that the injured party often lacks the necessary insight into or information about the AI system. Against this background, the EU, for example, is working on an AI liability directive which aims to make it easier for the injured party to prove that an AI system has breached the law by providing rights to information and establishing a rebuttable presumption of a causal link between the fault, the functioning of the AI system and the damage. Similar to regulatory rights to information, some courts may also shift the burden of proof in part to the party that possesses the relevant information, such as the developer of the AI application in question.
- Attribution of fault: When claims are asserted in relation to AI systems, it is not always possible to attribute fault to a specific entity due to autonomous decision making, lack of knowledge of the potentially liable parties, and lack of subjective fault on the part of the AI application itself. Fault itself usually involves negligent or intentional behaviour, a concept that is not transferable to algorithm-based AI applications. As a solution, some propose either opening up the possibility of direct liability of the AI system by granting it legal personality (‘e-person’) or attributing the “fault” of the AI system to either the operator, the developer or the user of the AI system.
- Standard of care: Finally, there is a lively legal debate about the applicable standard of care when operating AI systems. In general, a different standard of care is proposed depending on the risk profile and functionality of a particular AI system, e.g. whether it is used for private or business purposes. It is also debated whether the standard should be that of an imaginary human being (“human-machine comparison”) or a “machine-specific” standard of care. Furthermore, some argue that developers of AI systems should be required to update their products according to the current state of science and technology, resulting in a relatively high standard of care.
Key considerations
In defending against claims arising out of the use of AI applications, businesses should consider the following:
- First, being able to show that the input (i.e. training material) was correct, appropriate for its intended purpose and not corrupted. This will allow the AI application to apply the correct principles to a new input.
- Second, the AI application needs to be programmed correctly. While this of course applies to the “core” of the AI system, this is also highly relevant at the interfaces between different AI systems.
- Third, the AI application needs to be sufficiently supervised. Once correct input is provided and the programming is correct, the AI application has to be supervised properly to make sure that there are no grounds for allegations that the system is drawing incorrect or biased conclusions from statistical data.
- Fourth, it is also important that users of AI systems act ethically and according to the instructions. Since it is difficult to foresee any and all ways in which users may misuse AI systems, clear terms of use and technical guardrails are essential.
Avoiding disputes
Disputes concerning AI systems can arise on the basis of a variety of legal concepts, and, as with most things in life, all aspects of AI systems, i.e. their development, operation and use, can be subject to legal claims. To avoid disputes and be fully prepared if they arise, organizations should:
- Be informed: It is important to understand and always be aware that the development, operation and use of AI applications does not happen in a space free from legal boundaries. In addition to regulatory law, civil obligations in particular, as discussed above, should be kept in mind.
- Prepare for all scenarios: Developers and operators of AI systems should contractually bind their customers to certain terms of use and clearly explain these rules in order to maximize their safeguards. Exclusions or limitations of liability can be a further element to navigate and reduce risks in contractual relationships.
- Risk mitigation starts at the beginning: When developing and training AI systems, sufficient testing and the careful selection of training material are not only critical to the success of the AI application but also key to risk mitigation.
- Plan ahead: Where possible, the work and output of AI systems should be logged so that the organization can defend itself on the basis of these log files if a dispute arises (see the sketch after this list).
- Stay alert: In any case, all individuals and organizations involved should closely monitor and evaluate the performance of AI systems at all times, keeping in mind that, by definition, it won’t be possible to monitor all parts of the process. Often, monitoring will be limited to monitoring the output of an AI system.
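As a purely illustrative follow-up to the “Plan ahead” point, the sketch below logs each input and output of a hypothetical AI system as one JSON record per line in an append-only file. The call_model stand-in and all field names are assumptions chosen for the example, not a prescribed format.

```python
# Illustrative sketch only: append-only audit logging of a hypothetical AI
# system's inputs and outputs, so log files exist if a dispute later arises.
import json
import time


def call_model(prompt: str) -> str:
    """Stand-in for the real model interface (assumed, not a real API)."""
    return f"model response to: {prompt!r}"


def logged_query(prompt: str, log_path: str = "ai_audit.log") -> str:
    response = call_model(prompt)
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "input": prompt,
        "output": response,
    }
    # One JSON object per line keeps the log easy to parse later and easy
    # to append to without rewriting earlier entries.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return response
```

Whatever form the log takes, recording timestamps, inputs and outputs in a consistent structure is what makes the records usable as evidence later.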
Specializing in AI-related legal matters
Bicak Law specializes in legal matters related to AI, focusing on providing expertise and guidance on the legal implications, risks, and regulations surrounding AI technologies. We help businesses, organizations, and individuals navigate the complex legal landscape associated with AI. Here are some of the areas in which we deal with AI-related legal matters:
- AI Governance and Compliance: We can assist in developing and implementing frameworks and policies to ensure ethical and responsible AI use. We help clients comply with relevant laws, regulations, and standards, such as data protection and privacy laws.
- Intellectual Property (IP) Protection: AI can generate new inventions, algorithms, and creative works. We advise on IP protection strategies, patent applications, copyright issues, and trade secret protection related to AI technologies.
- Data Privacy and Security: AI relies on large amounts of data, which raises concerns regarding privacy and security. We help clients navigate data protection laws, advise on data sharing agreements, and develop policies to ensure compliance and mitigate potential risks.
- Liability and Risk Mitigation: AI systems can have significant impacts on individuals and society. We assist clients in assessing and mitigating legal risks associated with AI, such as potential liabilities for AI errors, bias, or discrimination.
- Regulatory Compliance: AI applications may be subject to specific industry regulations or standards. We help clients understand and comply with sector-specific regulations in areas such as healthcare, finance, or autonomous vehicles.
- Contracts and Licensing: We draft and negotiate contracts related to AI, including licensing agreements, data sharing agreements, technology development agreements, and service contracts.
- Ethical and Social Implications: AI raises complex ethical and societal questions. We provide guidance on AI ethics, including fairness, transparency, accountability, and the responsible use of AI technologies.
- Litigation and Dispute Resolution: In cases where legal disputes arise involving AI technologies, Bicak provides representation and counsel in litigation related to AI. This may include disputes over intellectual property rights, contract disputes, liability claims, or issues surrounding AI ethics and fairness. We leverage our AI expertise to navigate complex legal arguments and provide effective representation in court or alternative dispute resolution processes.
- Employment and Labor Law: AI technologies can impact the workforce, leading to questions about job displacement, workplace surveillance, and fair employment practices. We advise on employment law issues related to AI, including employee rights, privacy concerns, and potential discrimination.
- Regulatory and Government Affairs: We assist clients in navigating the evolving regulatory landscape for AI. We provide guidance on compliance with emerging AI-specific regulations, engage in advocacy efforts, and represent clients in interactions with regulatory bodies.
- Mergers and Acquisitions: In transactions involving AI companies or technologies, we conduct due diligence, evaluate IP and technology assets, negotiate contracts, and address regulatory issues related to the acquisition or merger.
- International and Cross-Border Issues: AI technologies often operate across borders, raising challenges related to data protection, jurisdiction, and international regulations. We navigate these complexities and advise on cross-border AI matters.
- Cybersecurity and Data Breach: Given the reliance on data, AI systems can be vulnerable to cybersecurity threats. We develop data breach response plans, advise on cybersecurity measures, and assist in legal proceedings following a data breach.
- Public Policy and Advocacy: We engage in public policy discussions and advocacy efforts related to AI regulation, ethics, and societal impact. We provide legal expertise to policymakers, industry associations, and advocacy groups.
- Technology Transactions: We assist clients in negotiating and drafting contracts related to AI technology development, software licensing, cloud services, and data sharing agreements.
It’s important to note that the specific services and expertise provided by Bicak in AI-related legal matters may vary. We focus on niche areas, such as data privacy and security, while also offering comprehensive AI legal services that cover a broad range of AI-related legal issues.
We welcome potential clients seeking legal assistance in matters related to artificial intelligence. Please don’t hesitate to reach out to our law firm for expert guidance and support in navigating the complexities of AI-related legal issues.