1 Introduction
Learning representations that are useful for downstream tasks yet robust against arbitrary nuisance factors is a challenging problem. Automated systems powered by machine learning techniques are corner stones for decision support systems such as granting loans, advertising, and medical diagnostics. Deep neural networks learn powerful representations that encapsulate the extracted variations in the data. Since these networks learn from historical data, they are prone to represent the past biases and the learnt representations might contain information that were not intended to be released. This has raised various concerns regarding fairness, bias and discrimination in statistical inference algorithms
[16]. The European Union has recently released its "Ethics Guidelines for Trustworthy AI" report^1 (^1 https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai), which states that unfairness and biases must be avoided. In recent years, the community has investigated how to learn a latent representation that describes a target observed variable y well (e.g. annual salary) while being robust against a sensitive attribute s (e.g. gender or race). This nuisance can be independent of the target task, in which case the problem is termed domain adaptation; one example is the identification of faces regardless of the illumination conditions. In the other case, termed fair representation learning, y and s are not independent. This could be the case with y being the credit risk of a person while s is age or gender. Such a relation between these variables could be due to past biases that are inherent in the data. This independence is assumed to hold when building fair classification models. Although this assumption is overoptimistic, as these factors are probably not independent, we wish to find a representation z that is independent of s, which justifies the usage of such a prior belief [17]. This is mostly approached by approximating the mutual information between z and s and forcing the model to minimize this score, either in an adversarial [21, 15] or a non-adversarial [13, 17] manner. While these methods perform well on various datasets, they are still limited either by convergence instability in the adversarial case or by hindered performance compared to their adversarial counterparts in the non-adversarial case. Learning disentangled representations has been shown to be beneficial for learning fairer representations compared to general-purpose representations [12]. We use this concept to disentangle the components of the learned representations. Moreover, we treat the target and sensitive attributes as separate, independent generative factors and decompose the learned representation such that each part holds information related to its respective generative factor. This is achieved by enforcing orthogonality between the representations as a relaxation of the independence constraint. We hypothesize that decomposing the latent code into a target code z_t and a residual sensitive code z_s limits the leakage of sensitive information into z_t by redirecting it to z_s, while keeping z_t informative about the target task we are interested in. We propose a framework for learning invariant fair representations by decomposing learned representations into target and residual/sensitive representations. We impose disentanglement on the components of each code and an orthogonality constraint on the two learned representations as a proxy for independence. The learned target representation is explicitly enforced to be agnostic to sensitive information by maximizing the entropy of the sensitive information in z_t.
Our contributions are threefold:

Decomposing target and sensitive information into two orthogonal representations to better mitigate sensitive information leakage.

Promoting the disentanglement property to split the hidden generative factors of each learned code.

Enforcing the target representation to be agnostic to sensitive information by maximizing entropy.
2 Related work
Learning fair and invariant representations has a long history. Earlier strategies involved changing the examples to ensure fair representation of all groups. This relies on the assumption that equalized opportunities in the training set would generalize to the test set. Such techniques are referred to as data-massaging techniques [8, 18]. These approaches may suffer from underutilization of data or complications in the logistics of data collection. Later, Zemel et al. [22]
proposed a semi-supervised fair clustering technique to learn a representation space in which data points are clustered such that each cluster contains similar proportions of the protected groups. One drawback is that the clustering constraint limits the power of a distributed representation. To address this, Louizos et al. [13] presented the Variational Fair Autoencoder (VFAE), where a model is trained to learn a representation that is informative enough yet invariant to some nuisance variables. This invariance is approached through a Maximum Mean Discrepancy (MMD) penalty. The learned sensitive-information-free representation can later be used for any subsequent processing, such as classification of a target task. After the success of Generative Adversarial Networks (GANs)
[6], multiple approaches have leveraged this learning paradigm to produce robust invariant representations [21, 23, 4, 15]. The problem setup in these approaches is a minimax game between an encoder that learns a representation for a target task and an adversary that extracts sensitive information from the learned representation. The encoder minimizes the negative log-likelihood of the adversary, while the adversary is alternately trained to extract sensitive information. While methods relying on an adversarial zero-sum game of negative log-likelihood minimization and maximization perform well in the literature, they sometimes suffer from convergence problems and require additional regularization terms to stabilize training. To overcome these problems, Roy et al. [20] posed the problem as an adversarial non-zero-sum game where the encoder and discriminator have competing objectives that optimize for different metrics. This is achieved by adding an entropy loss that forces the discriminator to be uninformed about sensitive information. It is worth noting that [17] argue that adversarial training for fairness and invariance is unnecessary and sometimes leads to counterproductive results. Hence, they approximate the mutual information between the latent representation and the sensitive information using a variational upper bound. Lastly, Creager et al. [2] proposed a fair representation learning model based on disentanglement; their model has the advantage of flexibly changing sensitive information at test time and of combining multiple sensitive attributes to achieve subgroup fairness.
3 Methodology
Let X be the dataset of individuals from all groups and x be an input sample. Each input x is associated with a target attribute y with N_y classes, and a sensitive attribute s with N_s classes. Our goal is to learn an encoder that maps the input x to two low-dimensional representations z_t and z_s. Ideally, z_t must contain information regarding the target attribute while mitigating leakage about the sensitive attribute, and z_s contains the residual information related to the sensitive attribute.
3.1 Fairness definition
One of the common definitions of fairness proposed in the literature [21, 20, 19, 1]
is simply requiring the sensitive information to be statistically independent of the target. Mathematically, the prediction y_hat of a classifier
must be independent of the sensitive information s, i.e. P(y_hat | s) = P(y_hat). For example, in the German credit dataset, we need to predict the credit behaviour of the bank account holder regardless of sensitive information such as gender or age. In other words, P(y_hat | s = a) should be equal to P(y_hat | s = b) for any two groups a and b. The main objective is to learn fair data representations that are i) informative enough for the downstream task, and ii) independent of the sensitive information.
3.2 Problem Formulation
To promote the independence of the generative factors, i.e. target and sensitive information, we aim to maximize the log-likelihood of the conditional distribution p(y, s | x), where

log p(y, s | x) = log p(y | x) + log p(s | x)    (1)
To enforce our aforementioned conditions, we let our model encode the observed input data into target and residual representations,

z_t = f_t(x),   z_s = f_s(x)    (2)
and maximize the log-likelihood given the following constraints: (i) z_t is statistically independent of z_s, and (ii) z_t is agnostic to the sensitive information s. Our objective function can be written as

max  E_x [ log p(y | z_t) + log p(s | z_s) ]   s.t.  z_t ⊥ z_s  and  q(s | z_t) = U(s)    (3)
where U(s) denotes the uniform distribution over the sensitive classes.
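The statistical-parity requirement of Sec. 3.1, which the uniform-output constraint above serves, can be probed empirically on held-out predictions. A minimal sketch in Python; the function name and the binary-label toy example are illustrative, not from the paper:

```python
from collections import defaultdict

def demographic_parity_gap(y_pred, s):
    """Absolute gap in positive-prediction rates between sensitive groups.

    Under the independence requirement P(y_hat | s) = P(y_hat), the gap
    should be close to zero.
    """
    groups = defaultdict(list)
    for pred, group in zip(y_pred, s):
        groups[group].append(pred)
    rates = [sum(preds) / len(preds) for preds in groups.values()]
    return max(rates) - min(rates)

# Toy example: positive predictions equally distributed over two groups.
print(demographic_parity_gap([1, 0, 1, 0], [0, 0, 1, 1]))  # 0.0
```

A gap near zero indicates that the predictions are (marginally) independent of the sensitive attribute.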
3.3 Fairness by Learning Orthogonal and Disentangled Representations
As depicted in Fig. 1, our observed data x is fed to a shared encoder f and then projected into two subspaces, producing our target and residual (sensitive) representations z_t and z_s using the encoders f_t and f_s, respectively, where the encoder trunk is shared, i.e. z_t = f_t(f(x)) and z_s = f_s(f(x)). Each representation is fed to the corresponding discriminator: the target discriminator D_t and the sensitive discriminator D_s. Both discriminators and encoders are trained in a supervised fashion to minimize the following losses,
L_t = − E_{(x, y)} [ log q_t(y | z_t) ]    (4)
L_s = − E_{(x, s)} [ log q_s(s | z_s) ]    (5)
where q_t and q_s denote the predictive distributions of the discriminators D_t and D_s.
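The pipeline just described (a shared trunk, two projection heads, and a supervised discriminator per code) can be sketched with plain numpy. All layer sizes, weight names, and toy labels below are hypothetical; in the paper these are trained networks, not random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: input dim 8, shared width 16, code dim 4,
# 2 target classes and 3 sensitive classes.
W_shared = rng.normal(size=(8, 16))   # shared encoder trunk f
W_t = rng.normal(size=(16, 4))        # projection head -> z_t
W_s = rng.normal(size=(16, 4))        # projection head -> z_s
C_t = rng.normal(size=(4, 2))         # target discriminator D_t
C_s = rng.normal(size=(4, 3))         # sensitive discriminator D_s

def forward(x):
    h = relu(x @ W_shared)            # shared representation
    z_t, z_s = h @ W_t, h @ W_s       # target and residual codes
    return z_t, z_s, softmax(z_t @ C_t), softmax(z_s @ C_s)

def nll(probs, labels):
    # Supervised negative log-likelihood: each head predicts its own attribute.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

x = rng.normal(size=(5, 8))
z_t, z_s, p_t, p_s = forward(x)
loss = nll(p_t, np.array([0, 1, 0, 1, 0])) + nll(p_s, np.array([2, 0, 1, 1, 0]))
```

The two negative log-likelihood terms play the role of the supervised losses in Eqs. (4) and (5).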
To ensure that our target representation does not encode any leakage of the sensitive information, we follow Roy et al. [20] in maximizing the entropy of the sensitive discriminator's output given the target representation as
L_E = E_x [ Σ_{k=1}^{N_s} q_s(s_k | z_t) log q_s(s_k | z_t) ]    (6)
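A minimal numpy sketch of this entropy-maximization term, assuming the sensitive discriminator outputs softmax probabilities (names are illustrative):

```python
import numpy as np

def entropy_loss(probs):
    """Negative entropy of the sensitive discriminator's output q(s | z_t).

    Minimizing this term maximizes the entropy, pushing q(s | z_t) toward
    the uniform distribution so that z_t carries no usable sensitive
    information.
    """
    return np.mean(np.sum(probs * np.log(probs + 1e-12), axis=-1))

uniform = np.full((1, 4), 0.25)                    # maximally uncertain discriminator
confident = np.array([[0.97, 0.01, 0.01, 0.01]])   # leaks sensitive information
assert entropy_loss(uniform) < entropy_loss(confident)
```

The loss is smallest exactly when the discriminator is maximally uncertain about s given z_t.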
We relax the independence assumption by enforcing i) the disentanglement property, and ii) the orthogonality of the corresponding representations.
To promote (i) the disentanglement property on the target representation, we first need to estimate the distribution q(z_t | x) and enforce some form of independence among the latent factors,
q(z_t | x) = Π_j q(z_{t,j} | x)    (7)
Since q(z_t | x) is intractable, we employ variational inference, thanks to the reparameterization trick [10], and let our model output the distribution parameters μ and σ, and minimize the KL-divergence between the posterior and prior distributions as

L_KL^t = D_KL( q(z_t | x) ‖ p(z_t) )    (8)
where q(z_t | x) = N(μ, diag(σ²)) and p(z_t) is the prior. Similarly, we enforce the same constraints on the residual (sensitive) representation and minimize its KL-divergence as L_KL^s = D_KL( q(z_s | x) ‖ p(z_s) ).
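For diagonal Gaussians, these KL terms have a closed form, and the reparameterization trick keeps sampling differentiable. A small numpy sketch (function names are illustrative):

```python
import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL( N(mu_q, diag sigma_q^2) || N(mu_p, diag sigma_p^2) )."""
    var_q, var_p = sigma_q ** 2, sigma_p ** 2
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def reparameterize(mu, sigma, rng):
    # Reparameterization trick [10]: z = mu + sigma * eps, eps ~ N(0, I).
    return mu + sigma * rng.normal(size=mu.shape)

mu, sigma = np.zeros(4), np.ones(4)
assert np.isclose(kl_diag_gaussians(mu, sigma, mu, sigma), 0.0)  # identical Gaussians
assert kl_diag_gaussians(mu + 1.0, sigma, mu, sigma) > 0.0       # shifted posterior
```

The KL term grows as the posterior mean drifts away from the prior mean, which is what the orthogonal priors below exploit.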
To enforce (ii) the orthogonality between the target and residual (sensitive) representations, i.e. z_t ⊥ z_s, we hard-code the means of the prior distributions to be orthogonal. In this way, we implicitly enforce the weight parameters to project the representations into orthogonal subspaces. To illustrate this in a 2-dimensional space, we set the prior distributions to p(z_t) = N([1, 0]^T, I) and p(z_s) = N([0, 1]^T, I) (cf. Fig. 1).
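A toy 2-D illustration of the effect of orthogonal prior means as described above (the 0.1 noise scale is arbitrary):

```python
import numpy as np

# Hypothetical 2-D illustration of the orthogonal prior means.
mu_t = np.array([1.0, 0.0])   # prior mean for the target code z_t
mu_s = np.array([0.0, 1.0])   # prior mean for the residual code z_s
assert np.dot(mu_t, mu_s) == 0.0  # the prior directions are orthogonal

# Codes sampled around each mean concentrate in (nearly) orthogonal directions.
rng = np.random.default_rng(0)
z_t = mu_t + 0.1 * rng.normal(size=(1000, 2))
z_s = mu_s + 0.1 * rng.normal(size=(1000, 2))
mean_cos = np.mean(
    np.sum(z_t * z_s, axis=1)
    / (np.linalg.norm(z_t, axis=1) * np.linalg.norm(z_s, axis=1))
)
assert abs(mean_cos) < 0.05  # average cosine similarity is close to zero
```

Pulling the posteriors toward these priors via the KL terms is what pushes the two codes into orthogonal subspaces.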
To summarize, an additional loss term is introduced to the objective function, promoting both the orthogonality and disentanglement properties, denoted the Orthogonal-Disentangled loss,
L_OD = L_KL^t + L_KL^s = D_KL( q(z_t | x) ‖ p(z_t) ) + D_KL( q(z_s | x) ‖ p(z_s) )    (9)
A variant of this loss without the orthogonality property, denoted the Disentangled loss, is also introduced for the purpose of an ablative study (see Sec. 4.3).
3.4 Overall objective function
To summarize, our overall objective function is
L = L_t + L_s + λ_E L_E + λ_OD L_OD    (10)
where λ_E and λ_OD are hyperparameters that weigh the Entropy loss and the Orthogonal-Disentangled loss, respectively. A sensitivity analysis on the hyperparameters is presented in Sec. 4.5.
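A trivial sketch of how the terms combine; the names lam_e and lam_od are hypothetical stand-ins for the two weights:

```python
def total_loss(l_t, l_s, l_entropy, l_od, lam_e, lam_od):
    """Overall objective (sketch): the two supervised losses plus the
    entropy and Orthogonal-Disentangled regularizers, weighted by the
    hypothetical hyperparameters lam_e and lam_od."""
    return l_t + l_s + lam_e * l_entropy + lam_od * l_od

# Larger lam_e trades some target fit for stronger suppression of leakage.
loss = total_loss(0.3, 0.5, -1.2, 2.0, 0.5, 0.1)
```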
4 Experiments
In this section, the performance of the representations learned with our method is evaluated and compared against various state-of-the-art methods in the domain. First, we present the experimental setup by describing the five datasets used for validation, the model implementation details for each dataset, and the design of the experiments. We then compare the model's performance with state-of-the-art fair representation models on the five datasets. We perform an ablative study to monitor the effect of each added component on the overall performance. Lastly, we perform a sensitivity analysis to study the effect of the hyperparameters on training.
4.1 Experimental Setup
Datasets:
For evaluating fair classification, we use two datasets from the UCI repository [3], namely the German and Adult datasets. The German credit dataset consists of 1000 samples, each with 20 attributes, and the target task is to classify a bank account holder as having good or bad credit risk. The sensitive attribute is the gender of the bank account holder. The Adult dataset contains 45,222 samples, each with 14 attributes. The target task is a binary classification of annual income being more or less than $50,000, and again gender is the sensitive attribute.
To examine the model's learned invariance on visual data, we use the task of illumination-invariant face classification. Ideally, we want the representation to contain information about the subject's identity without holding information regarding the illumination direction. For this purpose, the Extended YaleB dataset is used [5]. The dataset contains the face images of 38 subjects under five different light-source direction conditions (upper right, lower right, lower left, upper left, and front). The target task is the identification of the subject, while the light-source condition is considered the sensitive attribute.
Following Roy et al. [20], we created a binary target task from the CIFAR-10 dataset [11]. The original dataset contains 10 classes, which we refer to as fine classes; we divide them into two categories, living and non-living, and refer to this split as the coarse classes. It is expected that living objects have common visual properties that differ from non-living ones. The target task is the classification of the coarse classes while not revealing information about the fine classes. With a similar concept, we divide the 100 fine classes of the CIFAR-100 dataset into 20 coarse classes that cluster similar concepts into one category. For example, the coarse class 'aquatic mammals' contains the fine classes 'beaver', 'dolphin', 'otter', 'seal', and 'whale'. For the full details of the split, the reader is referred to [20] or the supplementary materials of this manuscript. The target task for CIFAR-100 is the classification of the coarse classes while mitigating information leakage regarding the sensitive fine classes.
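The CIFAR-10 coarse labeling can be reproduced in a few lines; the living/non-living assignment below is our reading of the split and should be checked against [20]:

```python
# CIFAR-10 fine classes in their standard index order.
CIFAR10_CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
                   "dog", "frog", "horse", "ship", "truck"]
LIVING = {"bird", "cat", "deer", "dog", "frog", "horse"}  # assumed assignment

def coarse_label(fine_idx):
    """Map a fine class index (the sensitive attribute) to the binary coarse
    label used as the target task: 1 = living, 0 = non-living."""
    return int(CIFAR10_CLASSES[fine_idx] in LIVING)

assert coarse_label(CIFAR10_CLASSES.index("truck")) == 0
assert coarse_label(CIFAR10_CLASSES.index("dog")) == 1
```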
Implementation details:
For the Adult and German datasets, we follow the setup described in [20]
by having a 1-hidden-layer neural network as the encoder; the discriminator has two hidden layers, and the target predictor is a logistic regression layer.
For the Extended YaleB dataset, we use an experimental setup similar to Xie et al. [21] and Louizos et al. [13], using the same train/test split strategy. The model setup is similar to [21, 20]: the encoder consists of one layer, the target predictor is one linear layer, and the discriminator is a neural network with two hidden layers of 100 units each. The parameters are trained using the Adam optimizer.
Similar to [20], we employed the ResNet-18 [7]
architecture for the encoder on the two CIFAR datasets. For the discriminator and target classifiers, we employed a neural network with two hidden layers (256 and 128 neurons). The Adam optimizer [9] is used in all experiments.
Experiment design:
We address two questions in the experiments. First, how much information about the sensitive attributes is retained in the learned representation z_t? Ideally, z_t would not contain any sensitive-attribute information. This is evaluated by training a classifier with the same architecture as the discriminator network on the sensitive-attribute classification task. The closer its accuracy is to a naive majority-label predictor, the better the model is. This classifier is trained with z_t as input after the encoder, target predictor, and discriminator have been trained and frozen. Second, how well does the learned representation perform in identifying the target attributes? To this end, we train a classifier similar to the target predictor on the learned representation to detect the target attributes. We also visualize the representations z_t and z_s using their t-SNE projections to show how the learned representations describe target attributes while being agnostic to the sensitive information.
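The majority-label reference point used in this evaluation is straightforward to compute; a small sketch:

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of the naive predictor that always outputs the most common
    label. The closer a probe's sensitive accuracy is to this value, the
    less sensitive information leaks through the target representation."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# E.g. five illumination classes with a 40/30/10/10/10 split:
labels = [0] * 40 + [1] * 30 + [2] * 10 + [3] * 10 + [4] * 10
print(majority_baseline_accuracy(labels))  # 0.4
```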
4.2 Comparison with state of the art
                                 CIFAR-10                    CIFAR-100
                                 Target Acc.  Sensitive Acc. Target Acc.  Sensitive Acc.
Baseline                         0.9775       0.2344         0.7199       0.3069
Xie et al. [21] (tradeoff #1)    0.9752       0.2083         0.7132       0.1543
Roy et al. [20] (tradeoff #1)    0.9778       0.2344         0.7117       0.1688
Xie et al. [21] (tradeoff #2)    0.9735       0.2064         0.7040       0.1484
Roy et al. [20] (tradeoff #2)    0.9679       0.2114         0.7050       0.1643
Ours                             0.9725       0.1907         0.7074       0.1447
We compare the proposed approach against various state-of-the-art methods on the five presented datasets. We first train the model with Algorithm 1 while changing the hyperparameters between runs. We choose the best-performing model in terms of the trade-off between target and sensitive classification accuracy. We then compare it with various state-of-the-art methods regarding sensitive information leakage and retained target information.
CIFAR datasets:
We compare the proposed approach with two other state-of-the-art methods on the CIFAR-10 and CIFAR-100 datasets, namely Xie et al. [21] and Roy et al. [20]. We examine two different trade-off points for both approaches. The first trade-off point is the one with the best target accuracy reported by the model, while the second is the one with the target accuracy closest to ours, for a fairer comparison. The lower the target accuracy in the trade-off, the better (lower) the sensitive accuracy. When the target accuracies are comparable, our model performs better at preventing sensitive information leakage into the representation z_t. Hence, the proposed method has a better trade-off between target and sensitive accuracy on both CIFAR-10 and CIFAR-100. However, our peak target performance is comparable to, but lower than, the peak target performance of the studied methods.
Extended YaleB dataset:
For the illumination-invariant classification task on the Extended YaleB dataset, the proposed method is compared with a logistic regression baseline (LR), the Variational Fair Autoencoder (VFAE) [13], Xie et al. [21], and Roy et al. [20]. The results are shown in Fig. 2 on the right-hand side. The proposed model performs best on the target-attribute classification while having the performance closest to the majority-classification line (dashed line in Fig. 2). The majority line is the trivial baseline of predicting the majority label; the closer the sensitive accuracy is to the majority line, the better the model is at hiding sensitive information from z_t. This means the learned representation is powerful at identifying the subjects in the images regardless of illumination conditions. See Sec. 4.4 for a qualitative, visual assessment.
Tabular datasets:
On the Adult and German datasets, we compare with LFR [22], a vanilla VAE [10], the Variational Fair Autoencoder [13], Xie et al. [21], and Roy et al. [20]. The results of these comparisons are shown in Fig. 2. On the German dataset, we observe very good performance in hiding sensitive information compared to [20]. On the target task, the model performs well compared to the other models, except for [20], which does marginally better than the rest. On the Adult dataset, our proposed model performs better than the aforementioned models on the target task while leaking slightly more sensitive information compared to the other methods and the majority line.
Generally, we observe that the proposed model performs well on all datasets with state of the art performance on visual datasets (CIFAR10, CIFAR100, YaleB). This suggests that such a model could lead to more fair/invariant representation without large sacrifices on downstream tasks.


4.3 Ablative study
In this section, we evaluate the contributions provided in the paper by eliminating parts of the loss function and studying how each part affects training in terms of target and sensitive accuracy. To this end, we use the best-performing models from the hyperparameter search when training with all contributions for each dataset. The models are trained with the same settings and architectures described in Sec.
4.1. We compare the following variations for each model alongside the baseline classifier:
Baseline: Training a deterministic classifier for the target task and evaluating the information leakage about the sensitive attribute.

w/o Entropy w/o KL: Neither the entropy loss nor the KL divergence is used. This case is similar to multi-task learning, with the tasks being the classification of the target and sensitive attributes.

Entropy + KL w/o Orth.: The entropy loss is used, and the disentanglement loss is used with identical prior means. Hence, there might be some disentanglement of generative factors within the components of each latent code, but no constraint forces the two representations into orthogonal subspaces.

Entropy + KL Orth.: All contributions are included.
The results of the ablative study are shown in Figure 3.

For the sensitive-class accuracy, a lower accuracy in distinguishing sensitive attributes is desirable. Compared to the baseline, we observe that adding the entropy loss and the orthogonality constraints on the representations lowers the discriminative power of the learned representation regarding sensitive information. This holds on all studied datasets except CIFAR-10, where the orthogonality constraint without entropy produced better representations for hiding sensitive information, with a small drop in target-task performance. In the remaining cases, having only the entropy loss or only the KL loss does not bring noticeable performance gains compared to a multi-task learning paradigm. This could be attributed to the fact that orthogonality on its own does not enforce independence of random variables, and another constraint is needed to encourage independent latent variables (i.e. the entropy loss).
Comparing the baseline with the w/o Entropy w/o KL case answers an important question: "Does multi-task learning with no constraints on the representations bring any added value in mitigating sensitive information leakage?" In three of the five studied datasets, it does. We see lower accuracy in identifying sensitive information when using the learned target representation as input to a classifier, even though no constraints on the relationship between the sensitive and target representations were applied while training the encoder. Simply adding an auxiliary classifier to the target classifier and forcing it to learn the sensitive attributes hides some sensitive information from the target classifier.

Regarding target accuracy, the proposed model does not suffer from large drops in target performance when disentangling target from sensitive information. This can be seen by comparing the target accuracy between the Baseline and Entropy + KL Orth. columns. The largest drop in target performance compared to the no-privacy baseline is seen on the German dataset. This could be due to the very high dependence between gender and the granting of good or bad credit in the dataset, and to the small number of subjects in the dataset.
4.4 Qualitative analysis
We visualize the learned embeddings using t-SNE [14] projections for the Extended YaleB and CIFAR-10 datasets (cf. Fig. 4). We use the image space x, as well as z_t and z_s, as inputs to the projection to visualize what type of information is held within each representation. We also show the label of each image with regard to the target task to make the clusters easier to investigate. For the Extended YaleB dataset, we see that in the image space x the images are clustered mostly by their illumination conditions. However, when using z_t, the images are not clustered according to lighting conditions but rather, mostly, by subject identity. Moreover, the visualization of the representation z_s shows that it contains information about the sensitive class. For the CIFAR-10 dataset, the image space basically clusters the images by dominant color. When using z_t, it is clear that the target information is separated: the right side represents the non-living objects and the left side the living objects. What should be observed in z_t is that, within each target class, the fine classes are mixed and indistinguishable; for example, cars, boats, and trucks are mixed on the right-hand side of the figure. The representation z_s has some information about the target class but also holds residual information about the fine classes, as seen in the annotated red rectangle: a group of horse images is clustered together, followed by a few dog images, then birds. This shows that z_s has captured some sensitive information while z_t is more agnostic to the sensitive fine classes.
4.5 Sensitivity analysis
To analyze the effect of hyperparameter choices on the sensitive and target accuracy, we show heatmaps of how the performance changes as the studied hyperparameters vary. The investigated hyperparameters are the KL weight, the entropy weight, the KL gamma, and the entropy gamma. We show the results on the Adult dataset. The sensitive accuracy reacts more strongly to some of these hyperparameters than to others; a similar trend is not visible in the target accuracy. Regarding the choice of the gamma values, we see that the sensitive leakage is highly affected by these hyperparameters, and the results vary when they are changed. However, more robust performance is observed on the target classification task.
5 Conclusion
In this work, we have proposed a novel model for learning invariant representations by decomposing the learned codes into sensitive and target representations. We imposed orthogonality and disentanglement constraints on the representations and forced the target representation to be uninformative about the sensitive information by maximizing the sensitive entropy. The proposed approach is evaluated on five datasets and compared with state-of-the-art models. The results show that our proposed model performs better than state-of-the-art models on three datasets and comparably on the other two. We observe better hiding of sensitive information while affecting the target accuracy minimally. This is in line with our hypothesis that decomposing the two representations and enforcing orthogonality can help with the problem of information leakage by redirecting that information into the sensitive representation. One current limitation of this work is that it requires a target task to learn the disentanglement, which could be avoided by learning reconstruction as an auxiliary task.
References
 [1] Barocas, S., Hardt, M., Narayanan, A.: Fairness and Machine Learning. fairmlbook.org (2019), http://www.fairmlbook.org
 [2] Creager, E., Madras, D., Jacobsen, J.H., Weis, M.A., Swersky, K., Pitassi, T., Zemel, R.: Flexibly fair representation learning by disentanglement. arXiv preprint arXiv:1906.02589 (2019)
 [3] Dua, D., Graff, C.: UCI machine learning repository (2017)
 [4] Edwards, H., Storkey, A.: Censoring representations with an adversary. arXiv preprint arXiv:1511.05897 (2015)

 [5] Georghiades, A.S., Belhumeur, P.N., Kriegman, D.J.: From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(6), 643–660 (2001)
 [6] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in Neural Information Processing Systems. pp. 2672–2680 (2014)

 [7] He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: European Conference on Computer Vision. pp. 630–645. Springer (2016)
 [8] Kamiran, F., Calders, T.: Classifying without discriminating. In: 2009 2nd International Conference on Computer, Control and Communication. pp. 1–6. IEEE (2009)
 [9] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
 [10] Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013)
 [11] Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images (2009)
 [12] Locatello, F., Abbati, G., Rainforth, T., Bauer, S., Schölkopf, B., Bachem, O.: On the fairness of disentangled representations. In: Advances in Neural Information Processing Systems. pp. 14584–14597 (2019)
 [13] Louizos, C., Swersky, K., Li, Y., Welling, M., Zemel, R.: The variational fair autoencoder. arXiv preprint arXiv:1511.00830 (2015)
 [14] van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9(Nov), 2579–2605 (2008)
 [15] Madras, D., Creager, E., Pitassi, T., Zemel, R.: Learning adversarially fair and transferable representations. arXiv preprint arXiv:1802.06309 (2018)
 [16] Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635 (2019)
 [17] Moyer, D., Gao, S., Brekelmans, R., Galstyan, A., Ver Steeg, G.: Invariant representations without adversarial training. In: Advances in Neural Information Processing Systems. pp. 9084–9093 (2018)
 [18] Pedreshi, D., Ruggieri, S., Turini, F.: Discrimination-aware data mining. In: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 560–568 (2008)

 [19] Quadrianto, N., Sharmanska, V., Thomas, O.: Discovering fair representations in the data domain. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 8227–8236 (2019)
 [20] Roy, P.C., Boddeti, V.N.: Mitigating information leakage in image representations: A maximum entropy approach. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2586–2594 (2019)
 [21] Xie, Q., Dai, Z., Du, Y., Hovy, E., Neubig, G.: Controllable invariance through adversarial feature learning. In: Advances in Neural Information Processing Systems. pp. 585–596 (2017)
 [22] Zemel, R., Wu, Y., Swersky, K., Pitassi, T., Dwork, C.: Learning fair representations. In: International Conference on Machine Learning. pp. 325–333 (2013)
 [23] Zhang, B.H., Lemoine, B., Mitchell, M.: Mitigating unwanted biases with adversarial learning. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. pp. 335–340 (2018)