Studi Tributari Europei. Vol.14 (2024), I.25 – I.37
ISSN 2036-3583

The rights and guarantees of taxpayers as a limit on the use of artificial intelligence by tax authorities

Luis Alberto Malvárez Pascual
University of Huelva (Spain)
Full Professor of Financial and Tax Law

Published: 2025-04-10


In recent years, tax authorities have introduced tools based on artificial intelligence to carry out various administrative activities. Currently, there are no regulations governing the development and use of such technology by these authorities to ensure its proper use. Nevertheless, the incorporation of artificial intelligence into various administrative processes must be carried out within the current legal framework, meaning that all principles and values governing administrative actions must be respected. Furthermore, the fundamental rights and basic guarantees of taxpayers constitute an insurmountable limit. This paper analyzes the risks and threats that the use of this technology may pose in relation to such principles and rights.


Keywords: artificial intelligence; tax procedures; guiding principles of administrative action; rights and guarantees of taxpayers.

1 Introduction

The use of artificial intelligence (AI) can improve the various administrative processes carried out by tax authorities. However, it is important to address the risks that the use of these technologies may entail. In this regard, ESEVERRI has predicted that the emergence of AI in tax procedures will lead to a reduction in the rights and guarantees of taxpayers. This paper will analyse to what extent the use of this technology can affect such fundamental rights.

The fact that AI may give rise to certain threats to the fundamental freedoms of taxpayers should not result in its use being banned. Tax authorities cannot do without this and other computer technologies for tasks that require massive data processing, since their use can greatly improve various administrative functions. However, measures must be established that ensure a responsible, safe and ethical use of AI, with the protection of taxpayers’ rights being one of the fundamental aspects of achieving this objective. This vision has recently been shared by the Spanish tax administration itself: in the document “AI Strategy of the Tax Agency” (hereinafter, AEAT), dated May 27, 2024, these principles are repeatedly referred to within the framework of the strategy proposed for the development and use of AI.

However, it must be kept in mind that there is currently no regulation that specifically governs the development and use of this technology by tax administrations. The EU Regulation on AI does not affect all human activities equally and will have little effect on the systems used by tax and customs administrations in their administrative procedures, insofar as these are excluded from the high-risk classification. This means that the strictest rules corresponding to that type of system will not apply, but only certain transparency rules, which will not contribute in any relevant way to the improvement of taxpayers’ rights. Nor are there national regulations governing the application of these technologies in the tax field.

It is essential to overcome this regulatory gap, for which the appropriate legal framework and preventive mechanisms must be established to promote a responsible and transparent development and use of these technologies in tax procedures. Priority must be given to an ethical use of AI, based on due respect for the rights and guarantees of taxpayers, as well as on the new rights that must be strengthened and consolidated for the protection of citizens in the digital age. The challenge for lawyers is to propose regulatory reforms that comply with the indicated principles.

In any case, in the absence of specific regulation, these technologies are applied within the framework of the existing legal system, which establishes the principles on which administrative actions must be based. In addition, such actions must in any case respect the rights and guarantees of the citizens they affect. The following sections examine how these principles and rights can operate as a limit on tax administrations that use tools based on AI and mass data processing. Taxpayers affected by these administrative actions retain all the rights available to them for their legal defence, so that in their appeals against the administrative acts that affect them they may raise any issue that arose in the course of those acts, including issues related to the use of AI.

2 The principles that govern the actions of tax administrations as a guarantee of the proper use of AI

In the deployment and use of AI tools in tax procedures, tax administrations must comply with all the principles that inspire and order administrative action. GARCÍA-HERRERA points out that administrations that adopt big data and AI technologies must be governed by the principles of prudence, non-discrimination, proportionality, transparency and data governance. Space limitations prevent a comprehensive study of this issue, so we will limit ourselves below to analyzing in greater depth only some of the principles that are basic for these purposes: proportionality, good administration, transparency and legal certainty.

2.1 The principle of proportionality

The principle of proportionality is among the principles that inform the administrative procedure and is basic to the organization of powers. It plays an enormously relevant role, since it must guide the legislator in regulating the procedures for applying taxes and must also guide the exercise of regulatory power. It is likewise recognized as a fundamental principle in the application of the tax system by article 3.2 of Law 58/2003, of December 17, General Tax Law (hereinafter, LGT). In addition, it is a principle continually used in national and European jurisprudence as a means of controlling the activities of the legislator and the Administration. It is thus fully operational as a means of controlling the activity of tax administrations, which must conform as far as possible to the criteria deduced from it. This principle affects all actions and acts of application of the tax system, since it determines how the relationship between power and fundamental rights must develop. Beyond being a limit on the different powers, it reflects the way in which the application of taxes must be understood in a democratic society, and it must therefore govern all decision-making processes within the framework of tax procedures. Any data processing must pass through the sieve of the principle of proportionality, which requires respect for the three criteria that comprise it: suitability, necessity and the weighing of the measure, or proportionality in the strict sense.

Firstly, this principle assumes that the means used in administrative actions are appropriate to the purposes that justify them. Although the objective pursued may be perfectly lawful, the means used to achieve it may not be, if they are not appropriate for that purpose. It must therefore be justified that the use of the technological tools employed and the processing of data are appropriate to the purposes pursued. In addition, all those aspects that are essential within the framework of the different administrative procedures must be taken into account so that the suitability of administrative acts and actions can be controlled. To this end, certain procedural steps are absolutely essential, such as the submission of allegations by the interested party and/or the hearing procedure, regulated in letters l) and m) of article 34.1 of the LGT. However, the essential element for carrying out this suitability control is the motivation of the acts, since its absence or insufficiency leaves taxpayers defenceless. This is precisely one of the problems of procedures resolved automatically by any technology based on algorithms, since experience demonstrates the deficiencies in the motivation of decisional computing. The motivation must be individualized, taking into account the conditions of the specific case, so a document that incorporates stereotyped phrases repeated in all decisions does not meet this requirement. The motivation of an AI system can hardly reach sufficient legal richness, particularly when it involves the exercise of discretionary powers or when the administrative decision depends on the legal interpretation of a normative text, since AI does not yet have the legal reasoning capacity of human beings. Therefore, in this type of situation, the Administration is exposed to many tax acts being annulled for absent or incorrect motivation. However, in various documents the AEAT has indicated that it does not use AI systems to make automated decisions, so as long as this does not change, the problem indicated will not arise.

Secondly, the need for data processing using AI systems must be justified, since the principle of minimum intervention is an essential component of the principle of proportionality. It is a principle that has been recognised for years in administrative and tax regulations, as well as in the jurisprudence of the Supreme Court (hereinafter, TS), which elevated it to the category of a general principle applicable to all public administrations. It requires that, when there is a choice between several appropriate measures to achieve the same objective, the least burdensome be used, since the least onerous possible administrative action must be sought. This principle is intended to prevent the situation of taxpayers from being unnecessarily aggravated. Therefore, if a less radical solution exists to achieve the same goal, that is, a measure that involves less restriction of individual rights and freedoms, it is to be preferred. A measure adopted in contradiction with this rule violates the principle of proportionality, so administrative actions that are more restrictive of fundamental rights than other possible alternative solutions of similar effectiveness may be annulled. The most restrictive measures should be adopted only when the options more respectful of taxpayers’ rights have failed. This principle is of great relevance to the establishment and fulfilment of formal duties or obligations, since the costs of complying with such duties must be limited, and actions of the tax administrations that require taxpayers’ intervention in such procedures must be carried out in the least burdensome way possible. The principle requires that the burdens imposed on taxpayers not be disproportionate in relation to the advantages they bring to the public good. Therefore, the incorporation of new AI technologies into tax procedures cannot lead to greater burdens for taxpayers but rather to the opposite, a reduction in those burdens.

Finally, it is necessary to take into account the principle of proportionality in the strict sense, as the Constitutional Court terms it. This principle is aimed at ensuring that administrative activity does not cause greater damage than it seeks to avoid, by weighing the benefits derived from the measure adopted against the restrictions of rights to which it gives rise. In this sense, the LGT contains various precepts that reflect this idea. They are norms referring to procedures that are especially burdensome in themselves, such as the enforced collection procedure, the adoption of precautionary measures or the suspension of administrative acts in the economic-administrative channel, as suspension without guarantees is admitted in certain situations. In short, mere formal compliance with the norms is not enough; the Administration must restrict the rights of taxpayers as little as possible in its ordinary way of acting.

In the area under analysis, the principle of proportionality determines, among other things, that unjustified and disproportionate interference in the personal sphere of citizens is not valid, for example through the use of personal data that lack even minimal tax significance or that are unrelated to the specific procedure for which they were collected. This also refers to the principle of the relevance and tax significance of such data, which derives from the right to data protection.

If the tax authorities were to take all these principles and criteria into account, the quality of administrative interventions would improve, greater respect for the rights of taxpayers would be achieved, and compliance costs for them would be limited; and, of course, the objectives pursued would be achieved with the same effectiveness. However, we are still a long way from this happening.

2.2 The principle of good administration

Another guiding principle of administrative action that must take center stage in the process of incorporating AI into tax procedures is the principle of good administration. The relevance of this principle has been recognized by the most recent jurisprudence of the Supreme Court, as well as by the CJEU in numerous matters affecting the tax field. This principle is based on article 41 of the Charter of Fundamental Rights of the European Union and, at an internal level, can be derived from articles 9.3 and 103 of the Spanish Constitution and the general principles included in article 3.1.e) of Law 40/2015, of October 1, on the Legal Regime of the Public Sector. The Supreme Court has argued that the Administration is required to conduct itself diligently to avoid possible dysfunctions arising from its actions. Under this principle, the administrations must not only strictly comply with the procedural rules, observing all the procedures provided for therein, but must also ensure the full effectiveness of the guarantees and rights recognized to taxpayers in the Spanish Constitution, in the laws and in other development regulations. In order to truly and effectively guarantee the rights of taxpayers, the tax administrations are required to behave objectively and diligently, through a proactive attitude. Jurisprudence also highlights that this principle constitutes, according to the best doctrine, a new paradigm of 21st century law, referring to a mode of public action that excludes negligent management.

This principle gives rise to a series of rights for citizens, as well as a corresponding list of duties for public administrations, which must act, in the exercise of their functions, in a fair, reasonable, equitable, impartial and transparent manner, for the benefit of citizens, respecting their rights and interests and treating them all equally, without discrimination. It is a principle that allows the rights and guarantees of citizens in administrative procedures to be strengthened. It is related to the fundamental right to a fair administrative procedure, without disproportionate delays, as well as to other rights and guarantees that may derive from it, such as the principles of transparency, impartiality, access to justice and effective administrative protection; the rights to information, to participation, to be heard, to obtain an administrative resolution within a reasonable period of time, and to receive effective and equitable treatment of matters; the requirement that the Administration act in good faith; and the motivation of administrative decisions.

In relation to the subject matter under study, relevant consequences can be drawn from this principle, although some of them can also be derived from other principles governing the tax procedure and from some of the taxpayers’ rights examined later. Thus, this principle requires transparency and publicity in the use of AI. It also implies the need to control the appropriate use of this technology so that the results derived from it are correct, for which all the processes aimed at this end must be implemented. A protocol of action and a guide to good practices must be established, indicating the principles and procedures to be followed to guarantee the adequate development of the systems and their proper use by officials. In turn, internal controls or audits must be carried out to guarantee compliance with those principles and procedures and, in particular, to control the biases and deviations that these tools may produce. It would also be advisable to carry out periodic external audits to ensure the proper functioning of the systems and to highlight possible improvements, as well as to establish a quality seal for the algorithms. Another aspect affected by this principle is the Administration’s obligation to sufficiently motivate tax acts, since citizens have the right to have decisions based on AI duly explained to them so that they can take the defensive actions the system offers. In addition, the principle would also require avoiding unnecessary and undue delay in the recognition of the rights requested, so it could be invoked to correct cases in which the delay in the resolution derives from an automated decision based on an algorithm. It is therefore a guiding principle that should inspire administrative actions in tax procedures, but it is also one that can be invoked in an appeal or claim when it is considered to have been violated by an administrative action or resolution in any procedure for the application of taxes. This makes it possible to do justice even in the absence of a flagrant violation of the regulations, which is precisely what can happen with the use of AI by tax administrations, given the existing lack of regulation.

2.3 The principle of transparency

Article 3.1.c) of Law 40/2015, of October 1, on the Legal Regime of the Public Sector, includes among the general principles that public administrations must respect the principle of “transparency of administrative action”. Law 19/2013, of December 9, on transparency, access to public information and good governance, also constantly refers to this principle. It is, therefore, a fundamental principle of the action of the public authorities that affects all public administrations.

The duty of transparency must be specified in various rights held by the taxpayers affected by tax procedures in which AI-based systems are used. Without being exhaustive, we can cite some of these rights. First, the taxpayer’s right to know whether an AI system has been used in the procedure, and the scope of such use, must be recognized, since the appeal that the taxpayer files against an act may be based on some issue related to the tool used. Second, the right of citizens to know the computer applications used by tax administrations must be established, the scope of which must be specified.

Thirdly, if an act has been adopted directly by the system, this circumstance must be specified in the automated decision that ends the procedure, as is already an obligation of the Administration in some neighbouring countries. Although it has been suggested that this duty could be based on an extensive interpretation of letter ñ) of article 34.1 of the LGT, the truth is that it is difficult to force the administrations to comply with it, given the broad terms of this provision, so an express provision to this effect would be necessary for it to be applied in a mandatory manner by the tax administrations. In a later section, the new rights and guarantees for the protection of taxpayers affected by the use of AI by these administrations, which relate back to the principle of transparency, will be analysed in more detail, examining the pillars on which that principle should rest for these purposes.

3 Traditional fundamental rights as a guarantee against the use of AI and mass data processing

Fundamental rights constitute a guarantee against the use of AI by tax administrations. In the absence of legal regulation that develops this matter, fundamental rights constitute an insurmountable limit to administrative action. It has been highlighted that, given the misgivings and distrust generated by the use of AI by tax administrations, they must be extremely careful in safeguarding taxpayers’ rights. It is true that these rights would be more effective against possible abuses if there were legislative development of this matter, which would allow the necessary adaptations to be made to guarantee the protection of taxpayers in this new technological context. Until this task is carried out, legal operators must be especially vigilant to ensure that tax administrations respect the essential content of such rights.

In particular, although they are not the only rights that may be affected, those that, in our opinion, may be most relevant for these purposes will be analyzed below: the right to personal and family privacy, the right to non-discrimination and equality in the application of the law, and the right to the protection of personal data. Without being exhaustive, we will try to analyse some of the aspects that may affect the rights mentioned and that could even lead to their violation. As contentious issues arise and the specific cases they generate are resolved by the courts, the effectiveness of these rights in eradicating inappropriate behaviour by the tax authorities can be verified.

3.1 The right to personal and family privacy

The right to personal and family privacy may be violated as a result of the processing of data that, although they may have indirect patrimonial content, refer to the personal sphere of the taxpayer. This situation may arise, fundamentally, when data are collected from open sources indiscriminately, without controlling their tax significance for a specific investigation already initiated. The data collection phase cannot be prospective, as this would lead the tax authorities to obtain all kinds of data in order to subsequently select the most relevant in the processing phase. Such a situation may arise, for example, in an attempt to prove a person’s tax residence in Spain: a large amount of information can be obtained from publications made on social networks, so that certain information may be handled that is of no tax interest and that may affect the private life of the person under investigation.

3.2 The right to non-discrimination and equality in the application of the law

The use of AI by tax authorities could reinforce equality and help make tax management fairer, since administrative decisions can be more objective, depending less on the subjectivity of officials, to the extent that this technology allows the system to respond identically to the same factual situation. In addition, greater effectiveness and efficiency can be achieved in the use of public resources. However, these applications are also imperfect and can produce inadequate results, generating risks that endanger equality: they can incorporate biases that result in discriminatory situations, which occur when the use of these systems leads to harmful treatment of a person or group of people without justification, in violation of the traditional principle of non-discrimination. In any case, the threat is not the technology itself but its misuse.

Numerous authors have agreed that the right that may be most seriously affected by the use of AI is the right to non-discrimination and, more specifically, the right to equality in the application of the law. Although biases are inherent to any human activity, the use of big data and AI applications can increase these types of situations, because they carry out massive data processing, such that the number of people who may be affected by these applications is very high. To prevent these cases from occurring, variables that give rise to discriminatory biases must not be introduced, since the model must be neutral in relation to certain characteristics of the human being, such as sex, race, religion, political ideology, age, income level, social class, nationality, etc. When factors of this type are used, despite their legal irrelevance for the resolution of the procedure, unjustified biases may occur in the decisions that are adopted, violating the principles of equality and non-discrimination. The same consequence would occur if the opening of the verification procedures had its origin in some of the parameters stated, so guarantees must be introduced to eliminate biases in the data entered into AI systems, even if this means less effectiveness in the pursuit of tax fraud.

On the other hand, biases can have their origin in different technical aspects necessary for the operation of the systems. Firstly, the discriminatory situation can have its origin in the algorithms used by the applications, which has led to the enunciation of a principle of “non-algorithmic discrimination”. Secondly, the violation of this right can also have its origin in the data that has been used to train the system, fundamentally when the cases that have been used for this purpose have been generated by AI, since in this case the erroneous results derived from the tool can be amplified. In particular, these discriminatory biases can occur in the systems used to select taxpayers who are to be subject to a tax control procedure, which will lead to certain types of operations, sectors, persons or groups being investigated on a recurring basis, without there being a reason for this.
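One simple way to make the bias controls and audits discussed above concrete is a selection-rate comparison across groups. The sketch below is purely illustrative and not any administration’s actual practice: the field names, the sample data and the four-fifths threshold (a common rule of thumb for disparate impact) are all assumptions. It computes the audit-selection rate per group and checks whether the lowest rate falls below 80% of the highest.

```python
# Hypothetical bias audit for a taxpayer-selection tool: compare the share
# of cases flagged for verification across groups. All names and the 0.8
# ("four-fifths") threshold are illustrative assumptions.

def selection_rates(cases, group_key):
    """Selection rate per group: share of cases flagged for a control procedure."""
    totals, flagged = {}, {}
    for case in cases:
        g = case[group_key]
        totals[g] = totals.get(g, 0) + 1
        flagged[g] = flagged.get(g, 0) + (1 if case["selected"] else 0)
    return {g: flagged[g] / totals[g] for g in totals}

def parity_ratio(cases, group_key):
    """Ratio of lowest to highest selection rate; below 0.8 suggests a bias to review."""
    rates = selection_rates(cases, group_key)
    return min(rates.values()) / max(rates.values())

cases = [
    {"region": "A", "selected": True},
    {"region": "A", "selected": False},
    {"region": "B", "selected": True},
    {"region": "B", "selected": True},
]
# Region A: 1 of 2 selected (0.5); region B: 2 of 2 (1.0) → ratio 0.5, below 0.8.
print(parity_ratio(cases, "region"))  # 0.5
```

An internal or external audit of the kind proposed in the text could run checks of this sort over each attribute (sex, nationality, income level, etc.) that must remain legally irrelevant to the selection.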

3.3 The right to the protection of personal data

The right to the protection of personal data is one of the most relevant rights involved in the use of AI by tax administrations, being a condition for the protection of other fundamental rights. In fact, it has been highlighted that currently the only existing legal guarantees in this matter are those established in the regulations on data protection. These regulations are Organic Law 3/2018, of December 5, on the Protection of Personal Data and guarantee of digital rights, as well as Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 (GDPR). However, these guarantees cover only part of the problem raised, since data protection is just one of the many issues involved in the use of AI by administrations. If personal data are processed through this type of tool, the principles established by these regulations must be respected. It must be kept in mind, however, that the 14th additional provision of Organic Law 3/2018 allows the survival of regulations establishing exceptions and limitations on the exercise of rights that were in force before the GDPR. This means that articles 23 and 24 of Organic Law 15/1999, of December 13, on the Protection of Personal Data, remain in force as long as they are not expressly modified, replaced or repealed. These rules allow tax authorities to deny the exercise of the rights recognized in articles 15 to 22 of the GDPR when administrative actions aimed at ensuring compliance with tax obligations would be hindered and, in any case, when the affected party is subject to inspection actions. The analysis of the implications of AI for this right must therefore take this starting point into account.

For the purposes of personal data protection, the principles set out in the aforementioned regulations must be complied with, such as transparency, relevance, quality, veracity and control of the data collected. The principle of transparency requires the application of the rules contained in articles 12 to 14 of the GDPR and, in addition, that a register of processing activities be drawn up in accordance with article 30 of the GDPR. The regulations also establish that personal data may only be used for a purpose that has been previously determined, which requires that the data be relevant for that purpose: data that are not necessary, that is, that lack tax significance in the specific procedure for which they are collected, may not be processed. Since AI tools allow massive data processing, there is a risk that a much larger amount of data than needed will be used, when the processing must be done with the smallest amount of personal data that allows its purpose to be fulfilled. Furthermore, there must be control over the storage of this information with tax implications, since it can only be kept and processed for the specific procedure for which it was obtained and for a limited period of time, and should be destroyed once the resolution that ends the procedure has become final. In short, data cannot be kept indefinitely on a prospective basis, in case they might be useful in a future investigation into the same taxpayer or a related person. All these principles can be jeopardized by the indiscriminate collection of a taxpayer’s data, particularly from open sources, since personal data without tax implications and with no direct relationship to the procedure for which they were collected can easily be swept up. In addition, the extraction of data from this type of source, particularly from social networks, jeopardizes the accuracy and veracity of the data, since they are not verified, and their processing can therefore lead to incorrect results. With regard to information control, tax authorities must have in place all the technical and organisational measures needed to ensure the security of the data stored or processed, preventing unauthorised communication or access. In short, all mechanisms that allow adequate governance of these data must be implemented. These measures must be reviewed periodically, taking into account the state of the technology, the nature of the data and the risks to which they are exposed.
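The data-minimisation and retention rules just described can be illustrated with a minimal sketch. Everything here is an assumption made for illustration (the field names, the one-year retention period, the idea of a fixed list of purpose-bound fields); it is not a description of any real system. The sketch keeps only the fields relevant to the procedure’s purpose and marks records for destruction once the resolution is final and the retention period has elapsed.

```python
# Illustrative sketch of data minimisation and retention control. Field
# names and the 365-day retention period are invented for illustration.
from datetime import date, timedelta

RELEVANT_FIELDS = {"taxpayer_id", "income", "tax_year"}  # purpose-bound subset

def minimise(record):
    """Drop any personal data not needed for the specific procedure."""
    return {k: v for k, v in record.items() if k in RELEVANT_FIELDS}

def is_expired(resolution_final_on, today, retention=timedelta(days=365)):
    """True once the resolution is final and the retention period has elapsed."""
    return resolution_final_on is not None and today > resolution_final_on + retention

raw = {"taxpayer_id": "X1", "income": 42000, "tax_year": 2023,
       "social_media_handle": "@x1"}  # a field with no tax significance
print(minimise(raw))                               # social_media_handle is stripped
print(is_expired(date(2023, 1, 10), date(2024, 6, 1)))  # True: past retention
print(is_expired(None, date(2024, 6, 1)))               # False: procedure still open
```

The point of the sketch is the order of operations: minimisation happens at collection, before any AI processing, and retention is tied to the specific procedure rather than to a prospective, indefinite store.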

Furthermore, it is very important to determine whether the provisions of article 22.1 of the GDPR, which establishes a right not to be subject to fully automated decisions, including profiling, are applicable for these purposes. However, section 2 of the same provision allows this type of decision in certain cases. Thus, letter b) allows it when the decision “is authorised by Union or Member State law”. Therefore, when it is expressly provided for by national regulations, as is the case in the LGT, the taxpayer may not object to receiving a fully automated decision as a way of ending the procedures. We agree with PÉREZ BERNABEU that taxpayers may not invoke the aforementioned right when it is a semi-automated decision, that is, when it derives from a decision-making support tool. Finally, another relevant aspect is the possibility for tax authorities to establish risk profiles in an automated manner for the purposes of classifying or segmenting taxpayers, which may give rise to significant consequences for their legal situation. These taxpayer segmentation activities can be developed both in the area of ​​tax control, so that inspections focus on taxpayers who have a high risk of having defrauded, and in the area of ​​collection, since debtors could be classified according to the risk of late payment, so that, for example, those who do not comply with payment obligations on time could have more difficulties in accessing a deferral or installment payment. It must be analyzed whether the right of citizens in relation to the creation of profiles established in article 22.1 of the GDPR is applicable in the tax area. 
These types of tools are not provided for in the regulations, so the conditions of the exception laid down in article 22.2.b) of the GDPR are not met, although articles 23 and 24 of Organic Law 15/1999 allow tax administrations to refuse the rights derived from article 22.1 of the GDPR when exercising them would hinder administrative actions aimed at ensuring compliance with tax obligations and, in particular, when inspection actions are being carried out.
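The segmentation described above can be illustrated with a minimal, purely hypothetical sketch: a score combines a few invented risk indicators and a threshold splits taxpayers into segments. All field names, weights and thresholds here are assumptions made for illustration; they do not reflect any real tool or criterion used by a tax administration.

```python
# Hypothetical sketch of automated taxpayer risk segmentation.
# Indicators, weights and thresholds are invented for illustration only.

def risk_score(taxpayer: dict) -> float:
    """Combine a few invented indicators into a score in [0, 1]."""
    score = 0.0
    if taxpayer.get("late_filings", 0) > 2:        # repeated late returns
        score += 0.4
    if taxpayer.get("cash_ratio", 0.0) > 0.8:      # share of turnover in cash
        score += 0.3
    if taxpayer.get("sector_mismatch", False):     # margins far from sector average
        score += 0.3
    return min(score, 1.0)

def segment(taxpayer: dict, threshold: float = 0.5) -> str:
    """Classify a taxpayer; a 'high' result would only flag the case for human review."""
    return "high" if risk_score(taxpayer) >= threshold else "low"

print(segment({"late_filings": 3, "cash_ratio": 0.9}))  # high
print(segment({"late_filings": 0, "cash_ratio": 0.1}))  # low
```

Even in this toy form, the sketch shows why such profiling falls within article 22.1 of the GDPR: the segmentation is produced without human intervention and can condition the taxpayer's legal situation.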

4 New rights and guarantees for the protection of taxpayers affected by the use of AI by tax authorities. Their reorientation to the principle of transparency and the duty to provide reasons

The doctrine has argued that, given the risks involved in the use of these new information-processing technologies, the traditional rights analysed in the previous section are not sufficient to protect taxpayers adequately. In this regard, it is considered necessary to introduce new specific guarantees when AI-based tools and mass data processing technologies are used in procedures. In reality, many of these guarantees relate to the application of a principle of transparency, which must operate both in the transmission of information to society and to the taxpayer affected by a procedure in which these tools are used. At present, transparency rules are provided only for applications that give rise to automated decisions, and they do not require the publication of many details about their operation. The duties to inform citizens need to be deepened and improved, and these guarantees should be extended to tools supporting management and decision-making, for which basic information on the operation of the AI systems used by tax administrations will have to be offered. In our opinion, the principle of transparency, in relation to AI-based tools, must rest on three pillars.

The first is that tax administrations provide information on the tools they use for mass data processing, including a brief description of their technical elements. This is the opposite of current practice, since only those applications that give rise to automated decisions are published. Following the repeal by Law 11/2007 of article 45.4 of Law 30/1992, there is no legal basis for obliging administrations to publish data on the computer tools they use. No subsequent law, including Law 19/2013 of 9 December on transparency, access to public information and good governance, has introduced a duty to disclose the technical characteristics of the programmes. The name and main technical characteristics of all the tools used should be required to be published in a public register, provided that this does not compromise the success of the procedures in which they are applied. To this end, reasonable transparency is proposed in publicising the computer tools used by tax administrations, both those that give rise to automated administrative resolutions and those used as decision-making support systems, which would represent a very significant change in relation to the latter type of tool.

The second pillar is the need to introduce a duty to inform taxpayers affected by a procedure in which AI-based tools are used. In the case of an automated decision, the resolution should expressly indicate that the act has been carried out entirely by means of a computer application, although at present there is no legal obligation to communicate this information. Taxpayers should also be informed when the opening of a verification procedure originates in a predictive AI system, as this duty of transparency should likewise extend to decision-making support systems.

The third pillar is the establishment of a duty to publish certain technical aspects of the tools used. In particular, the possibility of establishing a principle of “algorithmic explainability” has been raised. Under this principle, it is unacceptable for decisions of the tax authorities to be adopted by means of a completely opaque mathematical formula. However, the application of this principle raises many doubts and, in fact, there are different sensitivities in the doctrine as to its scope and how it should be interpreted or understood. Part of the doctrine has derived from it a right that allows citizens access to certain aspects of the architecture of the programs used and, in particular, to the source code or the algorithms that determine the results of the system, and possibly even to the training data that have been used. Other authors consider that the decisional architecture of these tools should be disclosed only if the algorithms decide the procedures automatically, but not when the tools are used to support decision-making by officials. In any case, what unites these authors is that, to a broader or more limited extent, they consider the public dissemination of the algorithms and other technical elements that determine the functioning of the tools to be necessary. However, another part of the doctrine has considered that the principle of “algorithmic explainability” could be redirected to the right to obtain a reasoned decision, since the resolution of any procedure requires an individualized response incorporating legal reasoning that is understandable to its recipients. This is also how we understand this principle, for the following reasons:

First, because public knowledge of the architecture of the tools and, in particular, of the algorithms used does not improve the rights and guarantees of the taxpayers affected by the tax procedures in which they are applied, among other reasons because this information will not be useful to most taxpayers, since hardly any of them will be able to understand it. It makes no sense to impose on the data controller an obligation to offer a complex mathematical explanation of how the algorithms or, in general, the tools work. This obligation must instead take the form of a duty to offer clear information, in natural language, on the essential aspects that have motivated the resolution communicated to the taxpayer, which is related to the duty to motivate administrative acts.

Second, because the public dissemination of all data relating to the programmes could undermine the objectives of the tax authorities. This is particularly the case for tools aimed at combating tax fraud, since the information provided on these tools must not compromise the investigative work of these administrations, as the Council for Transparency and Good Government has highlighted. Furthermore, in relation to applications concerning the prevention of tax risks, it must be borne in mind that article 170.7 of the RGGIT establishes the confidential nature of all systems for selecting taxpayers to be subject to a tax inspection, which also covers those based on new technological means of processing information. For its part, doctrine has also pointed out that full publicity of these systems may undermine the effectiveness of the actions of the tax authorities. The publication of the algorithms in this type of application could harm the effectiveness of investigations, since public knowledge of the mathematical formulas that determine their results would allow the creation of other programs establishing the “tolerated” levels of fraud, which would enable taxpayers to manipulate their accounting and tax data so as not to fall within the pre-established risk profiles. As indicated, a comprehensible explanation in natural language of the elements that led a taxpayer to fall within a risk profile and to be selected for a tax control procedure is preferable to the dissemination of the algorithms that determined this conclusion. In fact, the data that determined the opening of the procedure must be incorporated into the file, in addition to being included in the motivation of the inspection report.
In this way, the taxpayer will be able to know them and challenge them in adversarial proceedings, since in the appeal filed against the assessment derived from the procedure the taxpayer will be able to dispute the validity and reality of the data on which its initiation was based. As regards the procedure to be followed when using these tools to select the taxpayers to be inspected, we have proposed that the legislation include a prior investigation stage at the beginning of the inspection procedure, to prevent these activities from being, as is currently the case, de facto actions lacking regulation. After monitoring these prior actions, there would be more arguments for deciding whether or not to open the inspection procedure, depending on whether the real data obtained in such actions confirm the indications of fraud produced by the predictive AI tools. This would avoid inconveniencing taxpayers when the procedure is not necessary and would allow for greater effectiveness and efficiency of administrative actions.
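The natural-language motivation advocated above can be sketched in a minimal, purely illustrative form: the indicators that triggered a risk flag are mapped to plain-language reasons that could be incorporated into the file and the inspection report. The indicator names and wording are invented assumptions, not drawn from any real administration's systems.

```python
# Hypothetical sketch: turning triggered risk indicators into a plain-language
# statement of reasons, in line with the duty to motivate administrative acts.
# Indicator names and the explanatory wording are invented for illustration.

REASONS = {
    "late_filings": "the taxpayer filed several returns after the statutory deadline",
    "cash_ratio": "the declared share of cash turnover is unusually high",
    "sector_mismatch": "declared margins deviate markedly from the sector average",
}

def motivate(triggered: list[str]) -> str:
    """Build a comprehensible explanation from the triggered indicators."""
    if not triggered:
        return "No risk indicators were triggered."
    reasons = "; ".join(REASONS.get(k, k) for k in triggered)
    return f"The procedure was opened because: {reasons}."

print(motivate(["late_filings", "cash_ratio"]))
```

The point of the sketch is that the taxpayer receives the reasons, not the mathematics: the mapping from indicators to text is what makes the selection contestable in adversarial proceedings.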

Third, when emphasis is placed on the need to publish the algorithms used by a system and to provide technical details of its operation, the focus is put on an aspect that, in our opinion, is not the most important in this matter. It is as if, for any act adopted by an official, one sought to know how the brain of the person taking the decision works. Indeed, knowledge of the mathematical operations that led to a result cannot be more relevant than the decision itself, since the focus must be on the proper motivation of that act from the factual and legal point of view, to the extent that it may be subject to appeal. It is true that in some cases the decision-making process could be legally relevant, particularly when discriminations arise that derive from biases in the tools. This could occur, among other cases, when a system that resolves procedures systematically denies certain requests without justification, or when a taxpayer is regularly subject to the opening of verification procedures without justification. Even if such decisions may appear legally justified, a violation of the principle of equality before the law could be claimed if it is shown that the decision-making is based on biases in the application used. However, it is not easy to establish the biased nature of a decision, because knowing the reasons why the system adopted that decision would require auditing the training cases and the algorithms that were programmed, and even then the biases are not easy to locate.
In addition, explaining the decision will sometimes be very complicated, not only because of the AEAT's interest in maintaining the utmost confidentiality regarding the models it uses, but because in certain advanced types of AI, particularly in machine learning or deep learning systems, it can be practically impossible to understand and explain why a specific result has been reached (black boxes). In machine learning systems it is therefore difficult to clarify the decision-making process, so the only viable possibility is to control the results derived from these tools; otherwise, these systems could not be used by tax authorities.

Fourth, because at present there is no sufficient legal basis for claiming that taxpayers have a right to know the algorithms or source code of the tools used in the procedures that may have affected them. Although somewhat forced arguments can be put forward to justify the need for such publicity, the only legal basis could be various rules of the GDPR, none of which explicitly establishes that the data controller's duty of publicity extends to the technical aspects indicated. This is also the official position, as stated in the Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679, prepared by the Article 29 Data Protection Working Party, whose position coincides with the thesis defended in this work. Nor can this position be based on Spanish legislation, particularly after the repeal of article 45.4 of Law 30/1992, referred to above. Although there may be doubts as to whether administrations must publish information on these data under Law 19/2013, of 9 December, on transparency, access to public information and good governance, that Law contains no express recognition of this right, so its application to this end is forced, particularly as regards decision-making support tools. Furthermore, this right could clash with the limitations introduced by article 14 of the same Law for the purposes of the administrative functions of inspection and control, in the case of tools used by the tax inspection, as well as with article 170.7 of the RGGIT, which establishes the reserved nature of all systems for selecting taxpayers to be the subject of inspection actions, including computer means of information processing.
In short, an express regulatory change would be necessary for this thesis to be accepted, although, as already indicated, public knowledge of the technical aspects of the operation of AI-based tools would not, in general, do much to defend the rights of taxpayers.

Before demanding such in-depth technical knowledge of the tools, which contributes little to taxpayers’ guarantees, radical changes should be demanded to overcome the current situation of absolute opacity, in which it is not known which tools are used, what technology they are based on, what data are captured to feed these applications, or whether those data are kept or destroyed. The conclusion is that the right to “algorithmic explainability” must, in general, be redirected to the right to the motivation of administrative acts, since the decisions derived from these tools must be explained and motivated like any other decision of the Administration. The AI-based system must therefore be able to analyse the factual elements of the case in question and reach an appropriate legal solution that is properly motivated, so that the recipient can exercise their rights of defence. In short, what is relevant is that all the mathematical operations of AI systems be reduced to a comprehensible and well-founded text. Only if the system meets these objectives can AI be accepted as a valid mechanism for resolving procedures. As long as AI does not allow the cases it resolves to be adequately substantiated in legal terms, the decision-making process in tax procedures cannot be automated; these tools can be used only as an aid or support for decision-making by officials, who must compare the results derived from them with the factual elements obtained from reality. In fact, the AEAT itself acknowledges that no AI tool currently resolves any procedure automatically, as indicated above. The emphasis should therefore not be placed on the cognitive, mathematical and computational process that leads to the adoption of a specific act, so publication of the source code of the programs and the algorithms used is not necessary.
The lack of public knowledge of these aspects does not, in general, leave taxpayers defenceless, nor does it violate the right to effective judicial protection, although changes improving the transparency of this process must also be demanded. In short, we must focus on the individual acts that derive from AI-based tools and on the adequacy of the factual and legal basis of the decisions adopted, since what violates this right is the absence or insufficiency of motivation; this classic right must therefore be updated and expanded to adapt to these new situations.

5 Conclusions

Neither European nor Spanish regulations establish specific rules governing the use of AI by tax administrations. The EU Regulation excludes the applications used by these administrations from the high-risk classification, so no special precautions are established in relation to them, and the impact of this regulation in this area will therefore be minimal. For its part, national tax legislation does not refer to AI, although tools based on this technology must comply with the rules governing the use of computer applications, in addition to respecting the general legal framework, in which there is already some reference to the use of AI by public administrations.

Therefore, although there are no regulations that address this matter specifically, taxpayers are not left unprotected, since the use of AI must take place within the existing legal framework and, in particular, must observe the various principles that govern administrative action, such as proportionality, good administration, transparency and legal certainty, whose application in this area has been studied in this work. In addition, taxpayers hold numerous rights and guarantees, which apply at every stage of the tax procedure. In this sense, a central part of this article has consisted of analysing the risks and threats posed by the use of AI and the massive processing of data by tax administrations in relation to some of the most relevant fundamental rights, such as the rights to personal and family privacy, to non-discrimination and equality in the application of the law, and to the protection of personal data. All these rights apply to the use of new information technology, regardless of whether the act has been issued automatically through it or whether it has been used to support decision-making by officials.

Taxpayers are also entitled to other new-generation rights that are beginning to be applied in the digital field, such as the right to “algorithmic explainability”. However, we have concluded that this right can be traced back to the principle of transparency and the need to motivate administrative acts in natural language, without requiring tax administrations to disclose the full architecture of AI-based tools; they therefore do not have to publish the source code and algorithms used by the programmes, although the current situation of opacity must also be overcome. To this end, we have identified the elements for which greater transparency should be required.

In any case, if our study makes anything clear, it is that a regulatory framework is needed to ensure the proper use of these new tools in the actions carried out by the tax administrations. The legal regime governing the use of these applications has yet to be built: it is essential to strengthen the controls and limits on their development and use, and to achieve greater transparency in the use of this technology, particularly as regards the aspects pointed out in this work. Most importantly, the rights of citizens who may be affected by the use of AI must be guaranteed, which will require a comprehensive legislative reform providing specific regulation of this matter. It is up to jurists to demand that the regulations evolve so as to establish mechanisms that allow better protection of the rights and guarantees of taxpayers who may be affected by the use of these applications by the tax administrations.