
Technological innovation vs. fundamental rights of migrants in the AI Act – Official Blog of UNIO

Maria Clara Pina (master’s student in Human Rights at the School of Law of the University of Minho)

I.

Currently, in the so-called era of techno-solutionism,[1] digital technologies, including Artificial Intelligence (AI), have become widely used.[2] We are witnessing the growing but rapidly evolving phenomenon of border management and control through the use of new technologies[3] and automated individual decision-making (Article 22 of the General Data Protection Regulation, henceforth “GDPR”),[4] which employ AI and promise faster and more efficient decisions. However, these systems have the potential to harm human rights. Migration is becoming a transaction that requires migrants to exchange biometric and biographical data for access to resources or a jurisdiction – and to be seen as persons[5] with inherent rights and dignity.

At the same time, the number of migrants in the European Union (EU)[6] is growing, making it worthwhile to analyse the impact of these technologies and their regulation (or lack thereof), given their inevitable and rapid evolution but, above all, the constant character of the migratory phenomenon over time and the vulnerability inherent to the status of migrant. In this context, complex legal challenges arise, requiring an assessment of the EU regulatory framework on the use of AI in the context of border management, asylum and migration, considering the main gaps within the AI Act[7] and its far-reaching implications for the human rights of migrants.

II.

The AI Act stands as the first comprehensive regulatory instrument on AI, positioning the EU at the forefront of global AI governance.[8] Its emergence is closely linked to rapid technological advances, enhanced by the progress of machine learning, the ability to train algorithms, and the availability of extensive databases. This instrument is an integral part of the European Digital Strategy,[9] aiming at digital innovation and the development and deployment of technologies that improve daily life – which reflects a trust-based and human-centred approach.

Furthermore, the Regulation paves the way for AI to be placed at the service of human progress,[10] guaranteeing greater protection of fundamental rights, such as the protection of privacy and personal data, asylum, non-refoulement, non-discrimination and effective judicial protection [Articles 7, 8, 14, 19, 21 and 47 of the Charter of Fundamental Rights of the European Union, henceforth “CFREU”; Recital 6 and Article 1(1) of the AI Act], while prohibiting the use of AI systems to circumvent international obligations arising from the 1951 Geneva Convention[11] and the 1967 Protocol.[12]

The AI Act follows a proportionate risk-based approach (Recital 26 of the AI Act), imposing a graduated scheme of restrictions and obligations on providers and users of AI[13] systems (Article 2 of the AI Act), depending on the risk that their use entails for health, safety or fundamental rights.[14]

III.

AI systems posing unacceptable risks are prohibited (Recital 28 of the AI Act). However, this prohibition is not absolute (Article 5 of the AI Act), as it allows controversial exceptions that have fuelled intense political debates.[15]

Accordingly, assessments of natural persons intended to evaluate or predict the risk of a person committing a criminal offence, based solely on their profile [Article 3(52) of the AI Act and Article 4(4) GDPR] or on the assessment of their personality traits and characteristics, are prohibited [Article 5(1)(d) of the AI Act]. This prohibition aligns with the presumption of innocence (Recital 42) and is relevant in the context of the Visa Code,[16] where entry or the granting of a visa to a third-country national may be refused if that person is considered a threat to public order or internal security [Article 32(1)(a)(vi) of the same Code].[17] While profiling is exceptionally permitted by the GDPR, the use of AI for this purpose is prohibited. A person should only be considered a suspect of a criminal offence if such suspicion is based on a human assessment of objective facts (Recital 42).[18]

Moreover, AI systems for creating or expanding facial recognition databases by indiscriminately collecting images from the Internet or CCTV footage are prohibited [Article 5(1)(e) of the AI Act], protecting privacy and preventing mass surveillance.[19]

Biometric categorisation systems for individuals (Recital 16 of the AI Act) that, based on their biometric data [Recital 14 and Article 3(34) of the AI Act], deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation are also prohibited [Recital 30 and Article 5(1)(g) of the AI Act], except for the labelling and filtering of lawfully acquired biometric data sets in the field of law enforcement. This distinction is particularly relevant as EU Member States are increasingly using technology to check the identities and security of asylum seekers.[20]

Finally, real-time remote biometric identification systems [Article 3(41) of the AI Act] in publicly accessible spaces are prohibited [Article 5(1)(h) of the AI Act], except when deemed necessary for specific purposes defined in the Regulation [Article 5(2) of the AI Act].

It should be noted that the list of prohibited AI uses and systems is not exhaustive, and these practices may also be prohibited by other legal instruments. In particular, the limited prohibition of decisions based solely on the automated processing of personal data (Article 22 of the GDPR) should be highlighted as a prohibition that is relevant in the context of the use of AI systems but falls outside the scope of the AI Act. In addition, general prohibitions such as the prohibition of discrimination apply hand in hand with the AI Act.[21]

IV.

High-risk AI systems (Recital 46 and Article 6 of the AI Act), though not prohibited, can pose serious risks to the health, safety or fundamental rights of persons in the Union (Recital 48 of the AI Act) and are intended to be used by, or on behalf of, competent authorities, EU institutions, bodies, offices or agencies. This includes the AI systems (Recital 52 of the AI Act) listed in Annex III(7) of the AI Act [Article 6(1) of the AI Act].

In the area of migration, asylum and border control, ensuring accuracy, transparency and non-discrimination in decision-making is essential. As acknowledged in the Regulation, AI systems deployed in this domain affect persons who are in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities (Recital 60 of the AI Act).

Such systems include polygraphs or similar tools [Annex III(7)(a) of the AI Act] and systems to assess the security, irregular migration or health risk posed by a person who has entered or wishes to enter EU territory [Annex III(7)(b) of the AI Act]. Furthermore, systems to assist the competent public authorities in examining applications for asylum, visas and residence permits and related complaints regarding the eligibility of persons applying for a certain status – including related assessments of the reliability of evidence – are also considered high-risk [Annex III(7)(c) of the AI Act].

Finally, AI systems for the detection, recognition or identification of natural persons, other than the verification of travel documents, fall within the high-risk category [Annex III(7)(d) of the AI Act]. In this context, Automated Border Control systems are employed to compare the facial features of the person seeking entry with the photograph stored on the identity document, as well as with biometric data stored in large-scale data systems. While these systems do not yet fully integrate AI, a Frontex[22] report suggests exploring its potential to detect threats such as morphing attacks.[23]

Given the possible serious implications of these AI systems, they must comply with stricter requirements, such as risk management (Article 9 of the AI Act), transparency (Article 13 of the AI Act), human oversight (Article 14 of the AI Act), cybersecurity, accuracy and robustness (Article 15 of the AI Act), data quality and governance, as well as training and testing of the systems (Article 10 of the AI Act), registration in an EU database (Articles 49 et seq. of the AI Act), and a prior assessment of the impact on fundamental rights (Article 27 of the AI Act).

In addition to the obligations imposed, the AI Act safeguards individual rights for persons affected by such systems. Notably, it ensures the right to clear and meaningful explanations regarding the role of the AI system in the decision-making process and the main elements of the decision (Article 86 of the AI Act). This serves as a guarantee of the right to effective judicial protection (Article 47 CFREU), reaffirmed by the Court of Justice of the European Union (CJEU) in the Ligue des droits humains case,[24] which addressed automated risk assessment based on the PNR system.[25] The CJEU reinforced the need for transparency and access to information, having determined that the affected person must be able to understand how the decision criteria and programs used operate, in order to decide, with full knowledge of the relevant facts, whether or not to contest their unlawful or discriminatory nature.[26]

V.

Although the AI Act represents a significant step, we nevertheless argue that it falls short in certain areas, with potentially harmful impacts on the human rights of migrants. In fact, the list of prohibited AI systems seems far from complete: some high-risk systems, given the unacceptable risks that arise from them, should be prohibited. On the other hand, the list of high-risk systems also appears incomplete, and problems persist in guaranteeing transparency and human supervision in the migration sphere.

We therefore argue that both the list of prohibited AI systems and Annex III of the AI Act should be amended, under Articles 7, 97 and 112 of the AI Act, which allow the Commission to update the list in line with technological developments and social needs, ensuring that legislation remains aligned with technological advances and societal expectations. We will address some of the limitations, gaps and exceptions of the Regulation in the following sections.

VI.

Large-scale IT systems (Annex X of the AI Act) were initially built for more limited purposes, but over time and through various legislative changes their functions have expanded. These systems have become increasingly focused on border control and interact within a framework of interoperability,[27]/[28] which results in personal data being widely shared between systems, government departments and States within the EU’s integrated border management.[29] The ultimate aim of these systems is to safeguard and promote the EU’s fundamental objectives of enhancing security, facilitating cooperation and promoting the free movement of persons between Member States.

An example of automated risk assessments, algorithmic profiling of third-country nationals, and interoperability of systems is the European Travel Information and Authorisation System (ETIAS),[30] which is expected to become operational by mid-2025,[31] and requires pre-screening to determine whether travellers pose security, irregular migration or health risks. In this system, applications will undergo background checks against data already present in systems such as SIS, VIS, Eurodac, EES, ECRIS-TCN and ETIAS itself, Europol and certain Interpol databases. In addition, certain personal data will be compared with screening rules developed by Frontex, enabling the profiling of third-country nationals [Article 4(4) of the GDPR] based on risk indicators. If the comparison triggers an alert, the application must be processed manually by the ETIAS National Unit of the responsible Member State (Articles 21 and 22 of the ETIAS Regulation).[32]

So far, this system does not involve AI within the meaning of Article 3 of the AI Act, but a report by eu-LISA[33] suggests using AI systems to detect suspicious applications. It should be noted that for these large-scale systems, the requirements of the AI Act, without prejudice to the application of Article 5, will only come into effect in 2030 (Article 111 of the AI Act). This gap, although understandable, since the interoperability architecture is still under construction, could be problematic due to the lack of transparency and the broad leeway for the use of AI systems in this context,[34] further threatening the fundamental rights of migrants.

This type of system raises concerns regarding the rights to privacy and data protection (Article 8 of the CFREU). Large amounts of personal data are collected, stored, cross-referenced and analysed, which encourages the continuous collection and examination of personal data in automated risk assessment systems. This encompasses different types of data, such as social media activity, financial transactions and location information.[35] These practices may become intrusive and, consequently, must be guided by transparency and adhere to well-defined, legitimate purposes [Article 5(1)(a) of the GDPR].

Furthermore, sometimes neither the developer nor the user fully understands the reasons that lead to certain outcomes.[36] Even when the reasoning behind specific outcomes is clear to those developing the system, this does not necessarily ensure the level of transparency required for migrants, potentially compromising their right to effective judicial protection (Article 47 of the CFREU). In fact, in these cases, complaint mechanisms will be insufficient to protect individual rights, since those who do not have full access to the underlying data and logic of the system will be unable to contest it.

Moreover, a well-known problem associated with AI systems and automated risk assessment is the potential for perpetuating or reproducing discrimination (Article 21 of the CFREU). For instance, a profiling system based on variables such as nationality, gender and age is used to calculate the score of short-stay visa applicants wishing to enter the Netherlands and the Schengen area. If the system classifies the applicant as high-risk, authorities will investigate them further, often resulting in delays and discriminatory bias. In fact, obtaining visas for family members of Dutch, Moroccan and Surinamese citizens has proven to be difficult.[37] It is clear that if these systems are trained with historical data, such as non-automated decisions made by officials in visa procedures to identify potential irregular immigrants, there is a risk of reproducing the discrimination underlying those human decisions, which are often based on ethnic and racial profiling. Finally, these systems can make errors,[38] which can lead to unfair discrimination, culminating in the undue denial of entry or the incorrect risk classification of the migrant.

Considering the various risks outlined and the ongoing evolutionary trend, we believe that the prohibition of automated risk assessment should not have been limited to cases involving the prediction of the likelihood of a person committing a criminal offence. In fact, although the automated assessment of security, irregular migration or health risks in the context of migration is considered high-risk and must meet certain requirements, it is concerning that such practices are permitted in a context characterised by deep-rooted ethnic, racial and gender discrimination and the heightened vulnerability of migrants.

VII.

Although seemingly less intrusive on fundamental rights, the list of high-risk systems should have included those AI systems aimed at predicting migration trends and border crossings. For example, the European Asylum Support Office (EASO) – since 2022, the EUAA[39] – developed the Early Warning and Preparedness System, designed to forecast migration flows into EU territories. This system relies on data sources such as GDELT (information on events by country of origin), Google Trends (weekly online search trends by country of origin), Frontex (monthly detections of irregular border crossings) and internal data on the number of asylum applications and recognition rates in EU Member States. The algorithm seeks to anticipate which events will cause large-scale displacement and estimate the resulting number of asylum applications in the EU.[40]

On the one hand, by predicting the arrival of migrants, these systems can lead to efficient preparation for the arrival of people and allow the reallocation of resources according to reception needs. On the other hand, they can facilitate preventive responses to thwart migratory movement through measures to impede the access of migrants and asylum seekers to the territory of a State.[41]

Non-entry policies include visa checks, carrier sanctions, the establishment of international zones, and maritime interceptions on the high seas, and AI technologies can be central to each of these policies. However, this creates room for the reinforcement of unlawful practices contrary to non-refoulement, such as specific maritime interventions aimed at returning migrants and asylum seekers to places where they may fear for their lives or freedom, without even giving them the chance to apply for asylum. AI runs the risk of becoming yet another political instrument, used to reinforce old state practices aimed at containing international migration and preventing asylum seekers from reaching their territories.[42] Consequently, these systems must be subject to strict regulation.

VIII.

As previously mentioned, the prohibition of real-time remote biometric identification systems has exceptions. Their use is permitted when necessary for the search for victims of kidnapping, human trafficking, sexual exploitation or missing persons, the prevention of threats to the life or physical safety of natural persons and terrorist threats, or for the localisation and identification of a person suspected of a criminal offence [Articles 5(1)(h) and (2) of the AI Act]. Given that violations of immigration law are widely treated as criminal offences, and that individuals entering the EU may be victims of trafficking or have their lives at risk, any of these exceptions could be (mis)used to justify mass biometric surveillance of third-country nationals. The use of these systems requires prior authorisation by an independent judicial or administrative authority [Article 5(3) of the AI Act], which is an important safeguard. However, it is still unclear which authorities may be involved.[43]

Article 14 of the AI Act establishes that high-risk systems must be overseen by at least two natural persons. This aims to guarantee that AI systems are evaluated impartially and responsibly, ensuring the review of automated decisions and avoiding biases and injustices. Yet, under paragraph 5, verification by at least two natural persons for the purposes of migration, border control or asylum shall not apply where its application is disproportionate – without, however, clearly explaining which criteria or conflicting interests justify such an exception. This supervision, which is particularly important in sensitive areas such as migration and asylum, is essential to ensure the effective protection of fundamental rights. The lack of a clear justification for the exception creates a regulatory vacuum that can be exploited abusively, allowing the application of sometimes incorrect automated decisions without due human accountability. The lack of adequate supervision in these areas can lead to unfair and discriminatory decisions, which seriously affect the lives of migrants and refugees, without room for contestation (Article 47 of the CFREU).

Finally, high-risk systems must be registered in the EU database (Article 71 of the AI Act). However, in the area of migration and border management there is an exemption from public registration [Article 49(4) of the AI Act] and from publishing a summary of the AI project developed [Article 59(1)(j) of the AI Act]. While this solution reflects a security-focused approach, it increases the already alarming opacity surrounding the use of AI in migration, preventing public scrutiny and adequate monitoring of the impacts of these systems on the lives of migrants.

IX.

Despite its limitations, the AI Act offers a unique opportunity to advance the ethical and inclusive regulation of AI. Therefore, a coordinated effort is essential to ensure that inevitable technological innovation does not come at the expense of fundamental rights. Stricter measures are recommended, such as the prohibition of intrusive and scientifically unfounded systems and the expansion of the high-risk categories, as well as guaranteeing human oversight and transparency of systems and of the decisions taken based on them. The impact of these technologies on the lives of migrants requires that the use of AI systems be guided by the protection of fundamental rights, in order to build a truly fair and inclusive European migration system and ensure the non-replication of structural biases. This is a challenge that the EU cannot ignore, especially at a time when the balance between security, technological innovation and fundamental rights has never been more relevant.


[1] Niovi Vavoula, “Artificial Intelligence (AI) at Schengen borders: automated processing, algorithmic profiling and facial recognition in the era of techno-solutionism”, European Journal of Migration and Law (2021), accessed January 26, 2025, https://ssrn.com/abstract=3950389.

[2] A. Beduschi and M. McAuliffe, “Artificial intelligence, migration and mobility: implications for policy and practice”, in World Migration Report, eds. M. McAuliffe and A. Triandafyllidou [Geneva: International Organization for Migration (IOM), 2022], accessed January 19, 2025, https://www.publications.iom.int.

[3] Jane Kilpatrick and Chris Jones, A clear and present danger: Missing safeguards on migration and asylum in the EU’s AI Act (2022), 4, accessed January 26, 2025, https://www.statewatch.org.

[4] See Regulation (EU) 2016/679 of the EP and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data.

[5] Lucia Nalbandian, “An eye for an ‘I’: a critical assessment of artificial intelligence tools in migration and asylum management”, Comparative Migration Studies, v. 10, no. 32 (2022), accessed January 26, 2025, https://comparativemigrationstudies.springer.com.

[6] EUAA – European Union Agency for Asylum, Asylum Report 2023 (2023), 20, accessed January 26, 2025, doi: 10.2847/82162.

[7] See Regulation (EU) 2024/1689 of the EP and of the Council of 13 June 2024 laying down harmonised rules on AI.

[8] European Parliament, “Lei da UE sobre IA: primeira regulamentação de inteligência artificial” [“EU AI Act: first regulation on artificial intelligence”], 2023, accessed January 26, 2025, https://www.europarl.europa.eu/topics/pt/article/20230601STO93804/lei-da-ue-sobre-ia-primeira-regulamentacao-de-inteligencia-artificial. European Commission, “AI Act”, accessed January 26, 2025, https://digital-strategy.ec.europa.eu.

[9] European Commission, “Shaping Europe’s digital future”, accessed January 26, 2025, https://commission.europa.eu.

[10] Inga Ulnicane, “Artificial intelligence in the European Union: policy, ethics and regulation”, in The Routledge Handbook of European Integrations, eds. T. Hoerber, I. Cabras and G. Weber (London: Routledge, 2022), 259, doi: 10.4324/9780429262081-19.

[11] Adopted in 1951, available at https://dcjri.ministeriopublico.pt.

[12] Signed on January 31, 1967 and in force since 1967, available at https://dcjri.ministeriopublico.pt.

[13] A machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs, such as predictions, content, recommendations or decisions that can influence physical or virtual environments [Article 3(1) of the AI Act].

[14] Alessandra Silveira and Maria Inês Costa, “Regulating Artificial Intelligence (AI): on the civilisational choice we are all making”, UNIO – The Official Blog, July 17, 2023, accessed January 26, 2025, https://officialblogofunio.com/2023/07/17/editorial-of-july-2023/.

[15] Luca Bertuzzi, “AI Act: EU Parliament’s discussions heat up over facial recognition, scope”, Euractiv, 2022, accessed January 20, 2025, https://www.euractiv.com. Luca Bertuzzi, “AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement”, Euractiv, 2023, accessed January 20, 2025, https://www.euractiv.com.

[16] See Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code).

[17] Evelien Brouwer, “EU’s AI Act and migration control. Shortcomings in safeguarding fundamental rights”, VerfBlog, 2024, accessed January 26, 2025, https://dx.doi.org/10.59704/a4de76df20e0de5a.

[18] Paul Voigt and Nils Hullen, The EU AI Act: Answers to frequently asked questions (Berlin: Springer, 2024), 42, doi: 10.1007/978-3-662-70201-7.

[19] Voigt and Hullen, The EU AI Act, 42.

[20] Brouwer, “EU’s AI Act”.

[21] Voigt and Hullen, The EU AI Act, 38.

[22] Frontex, Artificial Intelligence-based capabilities for the European border and coast guard: final report (2021), 28-29, accessed January 26, 2025, https://www.frontex.europa.eu.

[23] Techniques by which a traveller deliberately attempts to be misidentified or misclassified by the biometric recognition system.

[24] Judgment CJEU Ligue des droits humains, 21 June 2022, Case C-817/19.

[25] See Directive (EU) 2016/681 of the European Parliament and of the Council of 27 April 2016 on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terrorist offences and serious crime.

[26] Brouwer, “EU’s AI Act”.

[27] See Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of borders and visa and Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of police and judicial cooperation, asylum and migration.

[28] European Commission, “Overview of information management in the area of freedom, security and justice”, October 20, 2010, accessed January 26, 2025, https://eur-lex.europa.eu.

[29] Yiran Yang et al., “Automated Decision-making and Artificial Intelligence at European Borders and Their Risks for Human Rights”, SSRN, Working Draft (2024): 15, doi: 10.2139/ssrn.4790619.

[30] This system is evidence of the paradigm shift towards the aforementioned techno-solutionism, placing trust in technologies as a modern means of responding to the emergence of new forms of security threats, illegal immigration patterns and epidemic risks (Recital 29 of the ETIAS Regulation).

[31] ETIAS, “ETIAS will launch 6 months after EES rollout, official website updates”, 2024, accessed January 26, 2025, https://etias.com.

[32] See Regulation (EU) 2018/1240 of the European Parliament and of the Council of 12 September 2018 establishing a European Travel Information and Authorisation System (ETIAS).

[33] eu-LISA, Artificial intelligence in the operational management of large-scale IT systems. Research and technology monitoring report: perspectives for eu-LISA (Brussels: EU Publications Office, 2024), 30.

[34] Niovi Vavoula, “Regulating AI at Europe’s borders: where the AI Act falls short”, Verfassungsblog, December 13, 2024, accessed January 26, 2025, https://verfassungsblog.de/regulating-ai-at-europes-borders/.

[35] Yang et al., “Automated”, 20.

[36] Evelien Brouwer, “Schengen and the Management of Exclusion: Legal Remedies Caught in between Entry Bans, Risk Assessment and Artificial Intelligence”, European Journal of Migration and Law, v. 23 (2024): 485-507, doi: 10.1163/15718166-12340115.

[37] Nalinee Maleeyakul et al., “Ethnic Profiling”, Lighthouse Reports, 2023, accessed January 26, 2025, https://www.lighthousereports.com.

[38] P. Møhl, “Biometric technologies, data and the sensory work of border control”, Ethnos, v. 87, no. 2 (2022): 241-256, doi: 10.1080/00141844.2019.1696858.

[39] See Regulation (EU) 2021/2303 of the European Parliament and of the Council of 15 December 2021 on the European Union Agency for Asylum and repealing Regulation (EU) No 439/2010.

[40] Derya Ozkul, Automating Immigration and Asylum: The Uses of New Technologies in Migration and Asylum Governance in Europe (Oxford: Refugee Studies Centre, University of Oxford, 2023), 15.

[41] Brouwer, “EU’s AI Act”.

[42] Ana Beduschi, “International migration management in the age of artificial intelligence”, Migration Studies, v. 9, no. 3 (2020): 576-596, doi: 10.1093/migration/mnaa003.

[43] Vavoula, “Regulating”.


Picture credit: by Markus Spiske on pexels.com.


