<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3.dtd">
<article article-type="research-article" dtd-version="1.3" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xml:lang="ru"><front><journal-meta><journal-id journal-id-type="publisher-id">vestnikuriu</journal-id><journal-title-group><journal-title xml:lang="ru">Северо-Кавказский юридический вестник</journal-title><trans-title-group xml:lang="en"><trans-title>North Caucasus Legal Vestnik</trans-title></trans-title-group></journal-title-group><issn pub-type="ppub">2074-7306</issn><issn pub-type="epub">2687-0304</issn><publisher><publisher-name>Южно-Российский институт управления РАНХиГС</publisher-name></publisher></journal-meta><article-meta><article-id custom-type="edn" pub-id-type="custom">VRZWPQ</article-id><article-id custom-type="elpub" pub-id-type="custom">vestnikuriu-247</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research Article</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="ru"><subject>ПРОБЛЕМЫ МЕЖДУНАРОДНОГО, ГРАЖДАНСКОГО И ПРЕДПРИНИМАТЕЛЬСКОГО ПРАВА</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="en"><subject>PROBLEMS OF INTERNATIONAL, CIVIL AND BUSINESS LAW</subject></subj-group></article-categories><title-group><article-title>Регулирование алгоритмической дискриминации в международном праве: структурная оценка и эффективность индивидуальных средств правовой защиты с точки зрения прав человека</article-title><trans-title-group xml:lang="en"><trans-title>Regulation of algorithmic discrimination in international law: structural assessment and effectiveness of individual legal remedies from a human rights perspective</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author" corresp="yes"><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Алакбарзаде</surname><given-names>В. А. 
кызы</given-names></name><name name-style="western" xml:lang="en"><surname>Alakbarzade</surname><given-names>V. A.</given-names></name></name-alternatives><bio xml:lang="ru"><p>Вадия Афган кызы Алакбарзаде – аспирант, юридический факультет, кафедра ЮНЕСКО прав человека и информационного права</p><p>Баку</p></bio><bio xml:lang="en"><p>Vadiya A. Alakbarzade – PhD student, Faculty of Law, UNESCO Department of Human Rights and Information Law</p><p>Baku</p></bio><email xlink:type="simple">vadiya.zadeh@gmail.com</email><xref ref-type="aff" rid="aff-1"/></contrib></contrib-group><aff-alternatives id="aff-1"><aff xml:lang="ru">Бакинский государственный университет<country>Азербайджан</country></aff><aff xml:lang="en">Baku State University<country>Azerbaijan</country></aff></aff-alternatives><pub-date pub-type="collection"><year>2025</year></pub-date><pub-date pub-type="epub"><day>26</day><month>02</month><year>2026</year></pub-date><volume>0</volume><issue>3</issue><fpage>88</fpage><lpage>98</lpage><permissions><copyright-statement>Copyright &#x00A9; Алакбарзаде В., 2026</copyright-statement><copyright-year>2026</copyright-year><copyright-holder xml:lang="ru">Алакбарзаде В.</copyright-holder><copyright-holder xml:lang="en">Alakbarzade V.</copyright-holder><license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/" xlink:type="simple"><license-p>This work is licensed under a Creative Commons Attribution 4.0 License.</license-p></license></permissions><self-uri xlink:href="https://vestnik-uriu.ranepa.ru/jour/article/view/247">https://vestnik-uriu.ranepa.ru/jour/article/view/247</self-uri><abstract><sec><title>Введение</title><p>В связи с быстрым развитием технологий системы искусственного интеллекта (ИИ) широко применяются в различных социальных и правовых сферах: от найма сотрудников до вынесения судебных решений и оценки кредитоспособности. 
Несмотря на кажущуюся нейтральность этих технологий, они часто воспроизводят и усиливают существующие структурные предубеждения, что приводит к прямой или косвенной дискриминации определённых социальных групп.</p></sec><sec><title>Цель</title><p>Рассмотреть проблему алгоритмической дискриминации, возникающую в процессе принятия решений системами искусственного интеллекта, с точки зрения международного права и прав человека.</p></sec><sec><title>Теоретические основы</title><p>Статья раскрывает юридические и технические аспекты алгоритмической дискриминации. С юридической точки зрения акцент смещается с «намерения» на «последствие», что усложняет правовую оценку и распределение бремени доказывания. С технической стороны многие алгоритмы действуют как «чёрный ящик», то есть процессы принятия решений непрозрачны, что затрудняет правовое вмешательство.</p></sec><sec><title>Результаты и выводы</title><p>Автор утверждает, что действующая система международного права в области прав человека – включая Международный пакт о гражданских и политических правах (ICCPR) и Международную конвенцию о ликвидации всех форм расовой дискриминации (ICERD) – не способна эффективно регулировать проблему. Причины этого кроются как в технической сложности, так и в низком уровне обязательств государств и отсутствии конкретных правовых стандартов.</p><p>Существующие механизмы индивидуальной правовой защиты – Европейский суд по правам человека, договорные органы ООН и Межамериканский суд по правам человека – в настоящее время не обладают достаточными техническими знаниями и процессуальными возможностями для эффективного реагирования.</p><p>Отмечен относительно прогрессивный подход ЕС через Акт об ИИ и Общий регламент по защите данных (GDPR), в то время как в США и странах Глобального Юга сохраняются значительные пробелы. 
В завершение приводятся рекомендации по усилению международной правовой базы, адаптации национального законодательства, созданию специализированных институтов и обязательной оценке воздействия алгоритмов.</p></sec></abstract><trans-abstract xml:lang="en"><sec><title>Introduction</title><p>Due to rapid technological development, artificial intelligence (AI) systems are increasingly used in various social and legal domains, such as recruitment, judicial decision-making, and credit scoring. Although these technologies may appear neutral, the data they rely on and their learning processes often replicate and reinforce pre-existing structural biases, resulting in both direct and indirect discrimination against certain social groups.</p></sec><sec><title>Purpose</title><p>To analyze the issue of algorithmic discrimination arising in the decision-making processes of AI systems from the perspective of international law and human rights.</p></sec><sec><title>Theoretical Basis</title><p>The article explores the legal and technical dimensions of algorithmic discrimination. Legally, such discrimination does not align with traditional concepts, as the focus shifts from “intent” to “impact,” which complicates legal assessment and the burden of proof. Technically, many algorithms function as “black boxes”: their decision-making processes are opaque, making legal intervention more difficult.</p></sec><sec><title>Results and Conclusions</title><p>It is argued that the current international human rights framework – including instruments such as the ICCPR and ICERD – is insufficient to address this issue. 
These normative gaps are due not only to technical complexity but also to weak state compliance and the absence of specific legal standards.</p><p>Individual legal protection mechanisms – such as the European Court of Human Rights (ECtHR), UN Treaty Bodies, and the Inter-American Court of Human Rights – are currently ill-equipped to respond effectively due to limited technological expertise and procedural barriers.</p><p>The article highlights the European Union’s relatively advanced approach through instruments like the AI Act and GDPR, while pointing out the significant gaps that remain in the United States and Global South countries. The article concludes with recommendations, including strengthening the international legal framework, aligning national legislation, establishing specialized institutions, and mandating algorithmic impact assessment.</p></sec></trans-abstract><kwd-group xml:lang="ru"><kwd>алгоритмическая дискриминация</kwd><kwd>искусственный интеллект</kwd><kwd>проблема чёрного ящика</kwd><kwd>международное право</kwd><kwd>права человека</kwd><kwd>структурные предубеждения</kwd><kwd>индивидуальные правовые механизмы</kwd><kwd>Международный пакт о гражданских и политических правах</kwd><kwd>Международная конвенция о ликвидации всех форм расовой дискриминации</kwd><kwd>Акт об искусственном интеллекте</kwd><kwd>непропорциональное воздействие</kwd><kwd>оценка воздействия алгоритмов</kwd></kwd-group><kwd-group xml:lang="en"><kwd>Algorithmic discrimination</kwd><kwd>Artificial intelligence</kwd><kwd>Black box problem</kwd><kwd>International law</kwd><kwd>Human rights</kwd><kwd>Structural bias</kwd><kwd>Individual protection mechanisms</kwd><kwd>ICCPR</kwd><kwd>ICERD</kwd><kwd>GDPR</kwd><kwd>AI Act</kwd><kwd>Disproportionate impact</kwd><kwd>Algorithmic impact assessment</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="cit1"><label>1</label><citation-alternatives><mixed-citation xml:lang="ru">Ma, Z. 
(2024, February). The Inadequacy of the Current International Human Rights Regime for Algorithm Discrimination. Michigan Journal of International Law. https://www.mjilonline.org/theinadequacy-of-the-current-international-human-rights-regime-for-algorithm-discrimination/</mixed-citation><mixed-citation xml:lang="en">Ma, Z. (2024, February). The Inadequacy of the Current International Human Rights Regime for Algorithm Discrimination. Michigan Journal of International Law. https://www.mjilonline.org/theinadequacy-of-the-current-international-human-rights-regime-for-algorithm-discrimination/</mixed-citation></citation-alternatives></ref><ref id="cit2"><label>2</label><citation-alternatives><mixed-citation xml:lang="ru">Wójcik, M. A. (2022). Algorithmic Discrimination in Health Care: An EU Law Perspective. Health and Human Rights, 24(1), 93–103. https://pmc.ncbi.nlm.nih.gov/articles/PMC9212826/</mixed-citation><mixed-citation xml:lang="en">Wójcik, M. A. (2022). Algorithmic Discrimination in Health Care: An EU Law Perspective. Health and Human Rights, 24(1), 93–103. https://pmc.ncbi.nlm.nih.gov/articles/PMC9212826/</mixed-citation></citation-alternatives></ref><ref id="cit3"><label>3</label><citation-alternatives><mixed-citation xml:lang="ru">Xenidis, R. (2023). When computers say no: towards a legal response to algorithmic discrimination in Europe. In: Research Handbook on Law and Technology. Edward Elgar Publishing. https://www.elgaronline.com/edcollchap/book/9781803921327/chapter14.xml</mixed-citation><mixed-citation xml:lang="en">Xenidis, R. (2023). When computers say no: towards a legal response to algorithmic discrimination in Europe. In: Research Handbook on Law and Technology. Edward Elgar Publishing. https://www.elgaronline.com/edcollchap/book/9781803921327/chapter14.xml</mixed-citation></citation-alternatives></ref><ref id="cit4"><label>4</label><citation-alternatives><mixed-citation xml:lang="ru">Chen, X. (2024). 
Algorithmic proxy discrimination and its regulations. Computer Law &amp; Security Review. https://www.sciencedirect.com/science/article/abs/pii/S0267364924000876</mixed-citation><mixed-citation xml:lang="en">Chen, X. (2024). Algorithmic proxy discrimination and its regulations. Computer Law &amp; Security Review. https://www.sciencedirect.com/science/article/abs/pii/S0267364924000876</mixed-citation></citation-alternatives></ref><ref id="cit5"><label>5</label><citation-alternatives><mixed-citation xml:lang="ru">Ma, Z. (2024, February). The Inadequacy of the Current International Human Rights Regime for Algorithm Discrimination. Michigan Journal of International Law. https://www.mjilonline.org/theinadequacy-of-the-current-international-human-rights-regime-for-algorithm-discrimination/</mixed-citation><mixed-citation xml:lang="en">Ma, Z. (2024, February). The Inadequacy of the Current International Human Rights Regime for Algorithm Discrimination. Michigan Journal of International Law. https://www.mjilonline.org/theinadequacy-of-the-current-international-human-rights-regime-for-algorithm-discrimination/</mixed-citation></citation-alternatives></ref><ref id="cit6"><label>6</label><citation-alternatives><mixed-citation xml:lang="ru">Bains, C. (2024, September 13). The legal doctrine that will be key to preventing AI discrimination. Brookings. https://www.brookings.edu/articles/the-legal-doctrine-that-will-bekey-to-preventing-aidiscrimination/</mixed-citation><mixed-citation xml:lang="en">Bains, C. (2024, September 13). The legal doctrine that will be key to preventing AI discrimination. Brookings. https://www.brookings.edu/articles/the-legal-doctrine-that-will-bekey-to-preventing-aidiscrimination/</mixed-citation></citation-alternatives></ref><ref id="cit7"><label>7</label><citation-alternatives><mixed-citation xml:lang="ru">Falletti, E. (n.d.). Algorithmic Discrimination and Privacy Protection. Law Journal. 
https://www.lawjournal.digital/jour/article/view/185</mixed-citation><mixed-citation xml:lang="en">Falletti, E. (n.d.). Algorithmic Discrimination and Privacy Protection. Law Journal. https://www.lawjournal.digital/jour/article/view/185</mixed-citation></citation-alternatives></ref><ref id="cit8"><label>8</label><citation-alternatives><mixed-citation xml:lang="ru">Wachter, S., Mittelstadt, B., &amp; Russell, C. (2020). Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3547922</mixed-citation><mixed-citation xml:lang="en">Wachter, S., Mittelstadt, B., &amp; Russell, C. (2020). Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3547922</mixed-citation></citation-alternatives></ref><ref id="cit9"><label>9</label><citation-alternatives><mixed-citation xml:lang="ru">UNESCO. (n.d.). Recommendation on the Ethics of Artificial Intelligence. Retrieved from https://www.unesco.org/en/artificial-intelligence/recommendation-ethics</mixed-citation><mixed-citation xml:lang="en">UNESCO. (n.d.). Recommendation on the Ethics of Artificial Intelligence. Retrieved from https://www.unesco.org/en/artificial-intelligence/recommendation-ethics</mixed-citation></citation-alternatives></ref><ref id="cit10"><label>10</label><citation-alternatives><mixed-citation xml:lang="ru">OHCHR. (2021). The right to privacy in the digital age (focus on artificial intelligence), A/HRC/48/31. https://www.ohchr.org/en/calls-for-input/2021/right-privacy-digitalage-report-2021</mixed-citation><mixed-citation xml:lang="en">OHCHR. (2021). The right to privacy in the digital age (focus on artificial intelligence), A/HRC/48/31. 
https://www.ohchr.org/en/calls-for-input/2021/right-privacy-digitalage-report-2021</mixed-citation></citation-alternatives></ref><ref id="cit11"><label>11</label><citation-alternatives><mixed-citation xml:lang="ru">OHCHR. (2023). Report of the United Nations High Commissioner for Human Rights. https://docs.un.org/en/A/78/36</mixed-citation><mixed-citation xml:lang="en">OHCHR. (2023). Report of the United Nations High Commissioner for Human Rights. https://docs.un.org/en/A/78/36</mixed-citation></citation-alternatives></ref><ref id="cit12"><label>12</label><citation-alternatives><mixed-citation xml:lang="ru">European Court of Human Rights. (n.d.). Lopez Ribalda and Others v. Spain. Retrieved from https://hudoc.echr.coe.int/eng?i=001-197467</mixed-citation><mixed-citation xml:lang="en">European Court of Human Rights. (n.d.). Lopez Ribalda and Others v. Spain. Retrieved from https://hudoc.echr.coe.int/eng?i=001-197467</mixed-citation></citation-alternatives></ref><ref id="cit13"><label>13</label><citation-alternatives><mixed-citation xml:lang="ru">Inter-American Commission on Human Rights. (2025, April 22). Artificial Intelligence and Human Rights: our contributions to.... https://www.tedic.org/en/ai_iachr2025/</mixed-citation><mixed-citation xml:lang="en">Inter-American Commission on Human Rights. (2025, April 22). Artificial Intelligence and Human Rights: our contributions to.... https://www.tedic.org/en/ai_iachr2025/</mixed-citation></citation-alternatives></ref><ref id="cit14"><label>14</label><citation-alternatives><mixed-citation xml:lang="ru">European Parliament. (2025, February 26). Algorithmic discrimination under the AI Act and the GDPR. https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2025)769509</mixed-citation><mixed-citation xml:lang="en">European Parliament. (2025, February 26). Algorithmic discrimination under the AI Act and the GDPR. 
https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2025)769509</mixed-citation></citation-alternatives></ref><ref id="cit15"><label>15</label><citation-alternatives><mixed-citation xml:lang="ru">Employment Law Watch. (2024, August 22). Employers beware: AI-based workplace discrimination laws are coming to the U.S. https://www.employmentlawwatch.com/2024/08/articles/employmentus/employers-beware-aibased-workplace-discrimination-laws-are-coming-to-the-u-s/</mixed-citation><mixed-citation xml:lang="en">Employment Law Watch. (2024, August 22). Employers beware: AI-based workplace discrimination laws are coming to the U.S. https://www.employmentlawwatch.com/2024/08/articles/employmentus/employers-beware-aibased-workplace-discrimination-laws-are-coming-to-the-u-s/</mixed-citation></citation-alternatives></ref><ref id="cit16"><label>16</label><citation-alternatives><mixed-citation xml:lang="ru">Institute for Global Change. (2025, February 6). How Leaders in the Global South Can Devise AI Regulation That Enables Innovation. https://institute.global/insights/tech-anddigitalisation/how-leadersin-the-global-south-can-devise-ai-regulation-that-enables-innovation</mixed-citation><mixed-citation xml:lang="en">Institute for Global Change. (2025, February 6). How Leaders in the Global South Can Devise AI Regulation That Enables Innovation. https://institute.global/insights/tech-anddigitalisation/how-leadersin-the-global-south-can-devise-ai-regulation-that-enables-innovation</mixed-citation></citation-alternatives></ref><ref id="cit17"><label>17</label><citation-alternatives><mixed-citation xml:lang="ru">Working Party 29. (n.d.). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. https://ec.europa.eu/newsroom/article29/items/612112/en</mixed-citation><mixed-citation xml:lang="en">Working Party 29. (n.d.). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. 
https://ec.europa.eu/newsroom/article29/items/612112/en</mixed-citation></citation-alternatives></ref><ref id="cit18"><label>18</label><citation-alternatives><mixed-citation xml:lang="ru">European Commission. (2021, April 21). Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act). https://eurlex.europa.eu/legalcontent/EN/TXT/?uri=CELEX%3A52021PC0206</mixed-citation><mixed-citation xml:lang="en">European Commission. (2021, April 21). Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act). https://eurlex.europa.eu/legalcontent/EN/TXT/?uri=CELEX%3A52021PC0206</mixed-citation></citation-alternatives></ref><ref id="cit19"><label>19</label><citation-alternatives><mixed-citation xml:lang="ru">Falletti, E. (n.d.). Algorithmic Discrimination and Privacy Protection. Law Journal. https://www.lawjournal.digital/jour/article/view/185</mixed-citation><mixed-citation xml:lang="en">Falletti, E. (n.d.). Algorithmic Discrimination and Privacy Protection. Law Journal. https://www.lawjournal.digital/jour/article/view/185</mixed-citation></citation-alternatives></ref><ref id="cit20"><label>20</label><citation-alternatives><mixed-citation xml:lang="ru">International Ombuds Association. (n.d.). Organizational Ombuds and Artificial Intelligence. https://www.ombudsassociation.org/index.php?option=com_dailyplanetblog&amp;view=entry&amp;year=2024&amp;month=05&amp;day=15&amp;id=300:organiznationalombuds-and-artificial-intelligence</mixed-citation><mixed-citation xml:lang="en">International Ombuds Association. (n.d.). Organizational Ombuds and Artificial Intelligence. 
https://www.ombudsassociation.org/index.php?option=com_dailyplanetblog&amp;view=entry&amp;year=2024&amp;month=05&amp;day=15&amp;id=300:organiznationalombuds-and-artificial-intelligence</mixed-citation></citation-alternatives></ref><ref id="cit21"><label>21</label><citation-alternatives><mixed-citation xml:lang="ru">Council of Europe. (n.d.). CAHAI - Ad hoc Committee on Artificial Intelligence. https://www.coe.int/en/web/artificial-intelligence/cahai</mixed-citation><mixed-citation xml:lang="en">Council of Europe. (n.d.). CAHAI - Ad hoc Committee on Artificial Intelligence. https://www.coe.int/en/web/artificial-intelligence/cahai</mixed-citation></citation-alternatives></ref><ref id="cit22"><label>22</label><citation-alternatives><mixed-citation xml:lang="ru">Access Now. (n.d.). Algorithmic Impact Assessments. https://www.accessnow.org/cms/assets/uploads/2020/06/Algorithmic-Impact-AssessmentsAccess-Now.pdf</mixed-citation><mixed-citation xml:lang="en">Access Now. (n.d.). Algorithmic Impact Assessments. https://www.accessnow.org/cms/assets/uploads/2020/06/Algorithmic-Impact-AssessmentsAccess-Now.pdf</mixed-citation></citation-alternatives></ref><ref id="cit23"><label>23</label><citation-alternatives><mixed-citation xml:lang="ru">Big Brother Watch and Others v. UK, App. Nos. 58170/13, 62322/14, 24960/15, ECHR 2021</mixed-citation><mixed-citation xml:lang="en">Big Brother Watch and Others v. UK, App. Nos. 58170/13, 62322/14, 24960/15, ECHR 2021</mixed-citation></citation-alternatives></ref><ref id="cit24"><label>24</label><citation-alternatives><mixed-citation xml:lang="ru">EU GDPR, Article 22 and Recital 71.</mixed-citation><mixed-citation xml:lang="en">EU GDPR, Article 22 and Recital 71.</mixed-citation></citation-alternatives></ref><ref id="cit25"><label>25</label><citation-alternatives><mixed-citation xml:lang="ru">Bathaee, Y. (2018). The Artificial Intelligence Black Box and the Failure of Intent and Causation. 
Harvard Journal of Law &amp; Technology, 31(2), 889-931.</mixed-citation><mixed-citation xml:lang="en">Bathaee, Y. (2018). The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law &amp; Technology, 31(2), 889-931.</mixed-citation></citation-alternatives></ref><ref id="cit26"><label>26</label><citation-alternatives><mixed-citation xml:lang="ru">Metikoš, L., &amp; Ausloos, J. (2025). The Right to an Explanation in Practice: Insights from Case Law for the GDPR and the AI Act. Law, Innovation and Technology.</mixed-citation><mixed-citation xml:lang="en">Metikoš, L., &amp; Ausloos, J. (2025). The Right to an Explanation in Practice: Insights from Case Law for the GDPR and the AI Act. Law, Innovation and Technology.</mixed-citation></citation-alternatives></ref><ref id="cit27"><label>27</label><citation-alternatives><mixed-citation xml:lang="ru">UN Committee on the Elimination of Racial Discrimination (CERD). (2020). General Recommendation No. 36 on preventing and combating racial profiling by law enforcement officials. (CERD/C/GC/36).</mixed-citation><mixed-citation xml:lang="en">UN Committee on the Elimination of Racial Discrimination (CERD). (2020). General Recommendation No. 36 on preventing and combating racial profiling by law enforcement officials. (CERD/C/GC/36).</mixed-citation></citation-alternatives></ref><ref id="cit28"><label>28</label><citation-alternatives><mixed-citation xml:lang="ru">UN Human Rights Committee (HRC). (2020). General Comment No. 37 on the right to peaceful assembly (Article 21). (CCPR/C/GC/37).</mixed-citation><mixed-citation xml:lang="en">UN Human Rights Committee (HRC). (2020). General Comment No. 37 on the right to peaceful assembly (Article 21). (CCPR/C/GC/37).</mixed-citation></citation-alternatives></ref><ref id="cit29"><label>29</label><citation-alternatives><mixed-citation xml:lang="ru">Balcerzak, M. (2024). 
Implications of the United Nations human rights standards for the development of artificial intelligence. In: Artificial Intelligence and International Human Rights Law (pp. 147-168). Edward Elgar Publishing.</mixed-citation><mixed-citation xml:lang="en">Balcerzak, M. (2024). Implications of the United Nations human rights standards for the development of artificial intelligence. In: Artificial Intelligence and International Human Rights Law (pp. 147-168). Edward Elgar Publishing.</mixed-citation></citation-alternatives></ref><ref id="cit30"><label>30</label><citation-alternatives><mixed-citation xml:lang="ru">Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4, 100005.</mixed-citation><mixed-citation xml:lang="en">Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4, 100005.</mixed-citation></citation-alternatives></ref><ref id="cit31"><label>31</label><citation-alternatives><mixed-citation xml:lang="ru">Smuha, N. A. (2021). Beyond a human rights-based approach to AI governance: Promise, pitfalls, plea. Philosophy &amp; Technology, 34, 151-177.</mixed-citation><mixed-citation xml:lang="en">Smuha, N. A. (2021). Beyond a human rights-based approach to AI governance: Promise, pitfalls, plea. Philosophy &amp; Technology, 34, 151-177.</mixed-citation></citation-alternatives></ref><ref id="cit32"><label>32</label><citation-alternatives><mixed-citation xml:lang="ru">UN OHCHR. (2021). The Right to Privacy in the Digital Age. (A/HRC/48/31).</mixed-citation><mixed-citation xml:lang="en">UN OHCHR. (2021). The Right to Privacy in the Digital Age. (A/HRC/48/31).</mixed-citation></citation-alternatives></ref><ref id="cit33"><label>33</label><citation-alternatives><mixed-citation xml:lang="ru">Chesterman, S. (2020). Artificial intelligence and the limits of legal personality. 
International &amp; Comparative Law Quarterly, 69(4), 833-852.</mixed-citation></citation-alternatives></ref><ref id="cit34"><label>34</label><citation-alternatives><mixed-citation xml:lang="ru">Felder, R. M. (2021). Coming to terms with the black box problem: how to justify AI systems in health care. Hastings Center Report, 51(3), 34-42.</mixed-citation><mixed-citation xml:lang="en">Felder, R. M. (2021). Coming to terms with the black box problem: how to justify AI systems in health care. Hastings Center Report, 51(3), 34-42.</mixed-citation></citation-alternatives></ref><ref id="cit35"><label>35</label><citation-alternatives><mixed-citation xml:lang="ru">Turner, J. (2018). Legal personality for AI. In: Robot Rules: Regulating Artificial Intelligence (pp. 69-82). Springer.</mixed-citation><mixed-citation xml:lang="en">Turner, J. (2018). Legal personality for AI. In: Robot Rules: Regulating Artificial Intelligence (pp. 69-82). Springer.</mixed-citation></citation-alternatives></ref><ref id="cit36"><label>36</label><citation-alternatives><mixed-citation xml:lang="ru">OECD. (2022). OECD Framework for the Classification of AI Systems. OECD Publishing.</mixed-citation><mixed-citation xml:lang="en">OECD. (2022). OECD Framework for the Classification of AI Systems. OECD Publishing.</mixed-citation></citation-alternatives></ref><ref id="cit37"><label>37</label><citation-alternatives><mixed-citation xml:lang="ru">Treasury Board of Canada Secretariat (TBS). (2020). Algorithmic Impact Assessment. Government of Canada.</mixed-citation><mixed-citation xml:lang="en">Treasury Board of Canada Secretariat (TBS). (2020). Algorithmic Impact Assessment. 
Government of Canada.</mixed-citation></citation-alternatives></ref><ref id="cit38"><label>38</label><citation-alternatives><mixed-citation xml:lang="ru">Ada Lovelace Institute. (2022). Algorithmic impact assessment: user guide.</mixed-citation><mixed-citation xml:lang="en">Ada Lovelace Institute. (2022). Algorithmic impact assessment: user guide.</mixed-citation></citation-alternatives></ref></ref-list><fn-group><fn fn-type="conflict"><p>The author declares that there is no conflict of interest.</p></fn></fn-group></back></article>
