1 Introduction
1.1 “One Size Fits All” Denoisers
The following phenomenon may be familiar to those who develop learning-based image denoisers. If the denoiser is trained at a noise level σ, then its performance is maximized when the testing noise level is also σ. As soon as the testing noise level deviates from the training noise level, the performance drops (Choi et al., 2019; Kim et al., 2019). This is a typical mismatch between training and testing, arguably universal for all learning-based estimators. When such a problem arises, the most straightforward solution is to create a suite of denoisers trained at different noise levels and use the one that best matches the input noisy image (such as those used in the "Plug-and-Play" priors (Zhang et al., 2017; Chan et al., 2016)). However, this ensemble approach is not efficient, since the model capacity is several times larger than necessary.
A more widely adopted solution is to train one denoiser and use it for all noise levels. The idea is to train the denoiser on a dataset containing images at different noise levels. The competitiveness of these "one size fits all" denoisers relative to the best individually trained denoisers has been demonstrated in (Zhang et al., 2017, 2018; Mao et al., 2016; Remez et al., 2017). However, as we will illustrate in this paper, there is no guarantee that such an arbitrarily trained one-size-fits-all denoiser has consistent performance over the entire noise range. At some noise levels, usually at the lower tail of the noise range, the performance can be much worse than that of the best individuals. The cause of this phenomenon is related to how we draw the noisy samples, which is usually done uniformly across the noise range. The question we ask here is: if we allocate more low-noise samples and fewer high-noise samples, will we obtain a more consistent result?
1.2 Objective and Contributions
The objective of this paper is to find a sampling distribution such that the performance is consistent at every noise level. Here, by consistent we mean that the gap between the estimator and the best individuals is balanced. The idea is illustrated in Figure 1. The black curve in the figure represents the ensemble of the best individually trained denoisers. It is a virtual curve obtained by training a denoiser at each noise level. A typical "one size fits all" denoiser is trained using noisy samples drawn from a uniform distribution; its performance is denoted by the blue curve. The figure illustrates a typical inconsistency, where there is a significant gap at low noise but a small gap at high noise. The objective of this paper is to find a new sampling distribution (denoted by the orange bars) that achieves consistent performance throughout the entire range. The result returned by our method is a trade-off between the overall performance and the worst-case scenarios. Our goal is to characterize this trade-off.
The key idea behind the proposed method is a minimax formulation. This minimax optimization minimizes the overall risk of the estimator subject to the constraint that the worst-case performance gap is bounded. We show that under standard convexity assumptions on the set of all admissible estimators, we can derive a provably convergent algorithm by analyzing the dual. For estimators whose admissible set is not convex, the solutions returned by our dual algorithm are convex-relaxation results. We present the algorithm, and we show that its steps can be implemented by iteratively updating the sampling distribution.
2 Related Work
While the above sampling distribution problem may sound familiar, its solution does not seem to be available in the computer vision and machine learning literature.
Image Denoising. Recent work in image denoising has focused on developing better neural network architectures. When encountering multiple noise levels, (Zhang et al., 2017) presented two approaches: create a suite of denoisers at different noise levels, or train a single denoiser by uniformly sampling noise levels from the range. For the former approach, (Choi et al., 2019) proposed to combine the estimators by solving a convex optimization problem. (Gharbi et al., 2016) proposed an alternative approach that introduces a noise map as an extra input channel to the network. Our paper shares the same overall goal as (Kim et al., 2019). However, they address the problem by modifying the network structure, whereas we do not change the network.
Active Learning / Experimental Design. Adjusting the distribution of the training samples during the learning procedure is broadly referred to as active learning in machine learning (Settles, 2009) or experimental design in statistics (Chaloner and Verdinelli, 1995). Active learning / experimental design is typically associated with limited training data (Gal et al., 2017; Sener and Savarese, 2018). The goal is to optimally select the next data point (or batch of data points) so that we can estimate the model parameters, e.g., the mean and variance. The problem we encounter here is not about limited data, because we can synthesize as much data as we want since we know the image formation process. The challenge is how to allocate the synthesized data.
Constrained Optimization in Neural Networks. Training neural networks under constraints has been considered in the classic optimization literature (Platt and Barr, 1987; Zak et al., 1995). More recently, optimization methods have been developed for inequality-constrained problems in neural networks (Pathak et al., 2015) and equality-constrained problems (Márquez-Neila et al., 2017). However, these methods are generic approaches. The convexity of our problem allows us to develop a unique and simple algorithm.
Fairness Aware Classification. The task of seeking "balanced samples" can be considered as improving the fairness of the estimator. The literature on fairness-aware classification is growing rapidly. These methods include modifying the network structure, the data distribution, and the loss functions (Zafar et al., 2015; Pedreshi et al., 2008; Calders and Verwer, 2010; Hardt et al., 2016; Kamishima et al., 2012). Posing fairness as a constrained optimization has been proposed by (Zafar et al., 2017), but their problem objective and solution are different from ours.

3 Problem Formulation
3.1 Training and Testing Distributions: π(σ) and p(σ)
Consider a clean signal x. We assume that this clean signal is corrupted by some random process to produce a corrupted signal y. The parameter σ can be treated in a broad sense as the level of uncertainty. The support of σ is denoted by the set Σ. We assume that σ is a random variable with a probability density function p(σ).

Examples. In a denoising problem, the image formation model is given by y = x + σn, where n is a zero-mean unit-variance i.i.d. Gaussian noise vector. The noise level is measured by σ. For image deblurring, the model becomes y = h_r * x + n, where h_r denotes the blur kernel with radius r, "*" denotes convolution, and n is the noise. In this case, the uncertainty is associated with the blur radius r.

We focus on learning-based estimators. We define an estimator f as a mapping that takes a noisy input y and maps it to a denoised output f(y). We assume that f is parametrized by θ, but for notational simplicity we omit θ when the context is clear. The set of all admissible f's is denoted as F.
To train the estimator f, we draw training samples {(x_j, y_j)}, where (x_j, y_j) refers to the j-th training pair, and π(σ) is the distribution of the noise levels in the training samples. Note that π(σ) is not necessarily the same as p(σ): the distribution π(σ) governs the training samples, whereas p(σ) governs the testing samples. In most learning scenarios, we want to match π with p so that the generalization error is minimized. However, in this paper we purposely design a π which is different from p, because the goal is to seek an optimal trade-off. To emphasize the dependency of f on π, we denote f as f_π.
3.2 Risk and Conditional Risk: R(f) and R(f, σ)
Training an estimator requires a loss function. We denote the loss between a predicted signal f(y) and the ground truth x as ℓ(f(y), x). An example of the loss function is the squared Euclidean distance:

ℓ(f(y), x) = ‖f(y) − x‖².  (1)
Other types of loss functions can also be used, as long as they are convex in f.
To quantify the performance of the estimator f, we define the notion of conditional risk:

R(f, σ) = E_{(x,y)|σ}[ ℓ(f(y), x) ].  (2)
The conditional risk can be interpreted as the risk of the estimator f evaluated at a particular noise level σ. The overall risk is defined through iterated expectation:

R(f) = E_σ[ R(f, σ) ] = ∫_Σ R(f, σ) p(σ) dσ.  (3)
Note that the expectation over σ is taken with respect to the true distribution p(σ), since we are evaluating the estimator f.
3.3 Three Estimators: f_π, f_p and f_δσ
The estimator f_π is determined by minimizing the training loss. In our problem, since the training set follows a distribution π, f_π is determined by

f_π = argmin_{f∈F} ∫_Σ R(f, σ) π(σ) dσ.  (4)

This definition can be understood by noting that R(f, σ) is the conditional risk evaluated at σ. Since π(σ) specifies the probability of obtaining a noisy sample with noise level σ, the integration in (4) defines the training loss when the noisy samples are drawn in proportion to π(σ). Therefore, by minimizing this training loss, we obtain f_π.
Example. Suppose that we are training a denoiser over a noise range Σ = [σ_min, σ_max]. If the training set contains samples whose noise levels are uniformly distributed, i.e., π(σ) = 1/(σ_max − σ_min) for σ ∈ Σ and π(σ) = 0 otherwise, then f_π is obtained by minimizing the sum of the individual losses, where the noise levels of the training samples are equally likely to come from the range Σ.
If we replace the training distribution π by the testing distribution p, then we obtain the following estimator:

f_p = argmin_{f∈F} ∫_Σ R(f, σ) p(σ) dσ.  (5)
Since f_p minimizes the overall risk, we expect R(f_p) ≤ R(f_π) for all π. This is summarized in the lemma below.
Lemma 1.
The risk of f_p is a lower bound on the risk of all other f_π:

R(f_p) ≤ R(f_π), for all π.  (6)
Proof.
By construction, f_p is the minimizer of the risk according to (5); it holds that R(f_p) = min_{f∈F} R(f). Therefore, for any π we have R(f_p) ≤ R(f_π). ∎
The consequence of Lemma 1 is that if we minimize R(f) without any constraint, we will reach the trivial solution f_π = f_p. This explains why the problem is uninteresting if the goal is purely to minimize the generalization error without considering any constraint.
Before we proceed, let us define one more distribution, which has a point mass at a particular σ', i.e., π(σ) = δ(σ − σ'). The corresponding estimator f_δσ' is found by simply minimizing the training loss

f_δσ' = argmin_{f∈F} ∫_Σ R(f, σ) δ(σ − σ') dσ,  (7)

which is equivalent to minimizing the conditional risk R(f, σ'). Because we are minimizing the conditional risk at a particular σ', f_δσ' gives the best individual estimate at σ'. However, having the best estimate at σ' does not mean that f_δσ' can generalize: it is possible that f_δσ' performs well for one σ' but poorly for other σ's. Nevertheless, the ensemble of all these pointwise estimates forms a lower bound on the conditional risks, in the sense that R(f_δσ, σ) ≤ R(f, σ) for every f ∈ F at every σ.
3.4 Main Problem (P1)
We now state the main problem. The problem we want to solve is the following constrained optimization:

(P1)   min_{f∈F} R(f)   subject to   sup_{σ∈Σ} { R(f, σ) − R(f_δσ, σ) } ≤ ε.

The objective function reflects our original goal of minimizing the overall risk. However, instead of doing so without any constraint (which has the trivial solution f = f_p), we introduce a constraint that the gap between the current estimator f and the best individual f_δσ is no worse than ε, where ε is some threshold. The intuition here is that we are willing to sacrifice some overall risk by limiting the gap between R(f, σ) and R(f_δσ, σ), so that we have consistent performance over the entire range of noise levels.
4 Dual Ascent
In this section we discuss how to solve (P1). Solving (P1) is challenging because minimizing over f involves updating the estimator, which can be nonlinear w.r.t. the loss. To address this issue, we first show that as long as the admissible set F is convex, (P1) is convex, even if the estimators themselves are nonconvex. We then derive an algorithm to solve the dual problem.
4.1 Convexity of (P1)
We start by showing that under mild conditions, (P1) is convex.
Lemma 2.
Let F be a closed and convex set. Then, for any convex loss function ℓ, the risk R(f) and the conditional risk R(f, σ) are convex in f ∈ F, for any σ.
Proof.
Let f₁ and f₂ be two estimators in F and let 0 ≤ α ≤ 1 be a constant. Then, by the convexity of ℓ, the conditional risk satisfies

R(αf₁ + (1−α)f₂, σ) = E[ ℓ(αf₁(y) + (1−α)f₂(y), x) ] ≤ α R(f₁, σ) + (1−α) R(f₂, σ),

so R(·, σ) is convex. The overall risk R(f) is obtained by taking the expectation of the conditional risk over σ. Since taking the expectation is equivalent to integrating the conditional risk against the distribution p(σ) (which is nonnegative), convexity is preserved, and so R(f) is also convex. ∎
We emphasize that the convexity of R is defined w.r.t. f and not the underlying parameters θ (e.g., the network weights). For a convex combination of parameters αθ₁ + (1−α)θ₂, we do not in general have R(f_{αθ₁+(1−α)θ₂}) ≤ αR(f_{θ₁}) + (1−α)R(f_{θ₂}), because the map θ ↦ f_θ is not necessarily convexity-preserving.
The following corollary shows that the optimization problem (P1) is convex.
Corollary 1.
Let F be a closed and convex set. Then, for any convex loss function ℓ, (P1) is convex in f.
Proof.
Since the objective function is convex (by Lemma 2), we only need to show that the constraint set is also convex. Note that the "sup" operation is equivalent to requiring R(f, σ) − R(f_δσ, σ) ≤ ε for all σ ∈ Σ. Since R(f_δσ, σ) is constant w.r.t. f, we can define c(σ) = R(f_δσ, σ) + ε so that the constraint becomes R(f, σ) ≤ c(σ). Consequently, the constraint set is convex because the conditional risk is convex in f. ∎
The convexity of F is subtle but essential for Lemma 2 and Corollary 1. In a standard optimization over a scalar variable, convexity is granted if the admissible set is an interval in ℝ. In our problem, F denotes the set of all admissible estimators, which by construction are parametrized by θ. Thus, the convexity of F requires that a convex combination of two admissible f's remains admissible. All estimators based on generalized linear models satisfy this property. However, for deep neural networks it is generally unclear what the topology of F looks like, although some recent studies suggest negative results (Petersen et al., 2018). Nevertheless, even if F is nonconvex, we can solve the dual problem, which is always convex. The dual solution provides the convex relaxation of the primal problem. The duality gap is zero when Slater's condition holds, i.e., when F is convex and ε is chosen such that the constraint set is strictly feasible.
4.2 Dual of (P1)
Let us develop the dual formulation of (P1). The dual problem is defined through the Lagrangian:

L(f, λ) = R(f) + ∫_Σ λ(σ) [ R(f, σ) − R(f_δσ, σ) − ε ] dσ,  (8)

by which we can determine the Lagrange dual function

g(λ) = min_{f∈F} L(f, λ),  (9)

and the dual solution:

λ* = argmax_{λ≥0} g(λ).  (10)

Given the dual solution λ*, we can translate it back to the primal solution by minimizing the inner problem in (10), which is

f = argmin_{f∈F} ∫_Σ R(f, σ) [ p(σ) + λ*(σ) ] dσ.  (11)

This minimization is nothing but training the estimator using samples whose noise levels are distributed according to p(σ) + λ*(σ). (For p + λ* to be a legitimate distribution, we need to normalize it by the constant ∫_Σ [p(σ) + λ*(σ)] dσ; but as far as the minimization in (11) is concerned, this constant is unimportant.) Therefore, by solving the dual problem we simultaneously obtain the distribution π, which is π(σ) ∝ p(σ) + λ*(σ), and the estimator f_π trained using that distribution.
4.3 Dual Ascent Algorithm
The algorithm for solving the dual is based on the fact that the dual function g(λ) is concave in λ. As such, one can use the standard dual ascent method to find the solution. The idea is to sequentially update f and λ via

f^{t+1} = argmin_{f∈F} L(f, λ^t),  (12)

λ^{t+1}(σ) = [ λ^t(σ) + α^t ( R(f^{t+1}, σ) − R(f_δσ, σ) − ε ) ]₊.  (13)

Here, α^t is the step size of the gradient ascent step, and [·]₊ returns the positive part of the argument. At each iteration, (12) is solved by training an estimator using noise samples drawn from the distribution π^t(σ) ∝ p(σ) + λ^t(σ). The step in (13) computes the conditional risks and updates λ.
Since the dual problem is convex, the dual ascent algorithm is guaranteed to converge to the dual solution with an appropriate step size. We refer readers to standard texts, e.g., [10].
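To make the updates concrete, the following is a minimal runnable sketch of (12)-(13) on a toy scalar denoiser. The model, its closed-form training rule, and all numerical values are assumptions for illustration (in the spirit of the linear estimator used later in the experiments):

```python
import numpy as np

# Assumed toy model: y = x + sigma * n, estimator f(y) = a * y, E[x^2] = mu2.
# Conditional risk: R(a, sigma) = (a - 1)^2 * mu2 + a^2 * sigma^2.
# Training on a sampling distribution pi has the closed form
#   a(pi) = mu2 / (mu2 + E_pi[sigma^2]).
mu2 = 1.0
sigmas = np.linspace(0.1, 1.0, 10)            # discretized noise levels
p = np.full(len(sigmas), 1.0 / len(sigmas))   # testing distribution (uniform)

def train(pi):
    """Closed-form 'training' on sampling distribution pi."""
    return mu2 / (mu2 + np.sum(pi * sigmas ** 2))

def cond_risk(a):
    return (a - 1.0) ** 2 * mu2 + a ** 2 * sigmas ** 2

# Best individual risk at each sigma (trained with a point mass there).
R_best = mu2 * sigmas ** 2 / (mu2 + sigmas ** 2)

eps, alpha = 0.09, 1.0
lam = np.zeros_like(sigmas)                   # dual variable lambda(sigma)
for _ in range(300):
    pi = (p + lam) / np.sum(p + lam)          # (11): pi proportional to p + lambda
    R = cond_risk(train(pi))                  # (12): retrain with the current pi
    lam = np.maximum(0.0, lam + alpha * (R - R_best - eps))  # (13): ascent step

gap = cond_risk(train(pi)) - R_best
# At convergence, the largest gap should sit at the tolerance eps.
```

In this toy problem the active constraint is at the high-noise end, so the algorithm shifts sampling mass there until the worst-case gap equals ε.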
5 Uniform Gap
The solution of (P1) depends on the tolerance ε. This tolerance cannot be arbitrarily small, for otherwise the constraint set becomes empty. The smallest ε which still ensures a nonempty constraint set is denoted ε_min. The goal of this section is to determine ε_min and discuss its implications.
5.1 The Uniform Gap Problem (P2)
The motivation for studying the so-called Uniform Gap problem is the inadequacy of (P1) when the tolerance ε is larger than ε_min (i.e., we tolerate more than needed). The situation can be understood from Figure 2. For any allowable ε, the solution returned by (P1) can only ensure that the largest gap is no more than ε. It is possible that the high end has a significantly smaller gap than the low end. The gap becomes uniform only when ε = ε_min, which is typically not known a priori.
If we want to maintain a constant gap throughout the entire range of σ, then the optimization goal becomes minimizing the maximum risk gap, without worrying about the overall risk. In other words, we solve the following problem:

(P2)   min_{f∈F} sup_{σ∈Σ} { R(f, σ) − R(f_δσ, σ) }.
When (P2) is solved, the corresponding risk gap is exactly ε_min, defined as

ε_min = R(f*, σ) − R(f_δσ, σ), for any σ ∈ Σ,  (14)

where f* denotes the solution of (P2). The supremum can be lifted from the above equation because, by construction, (P2) guarantees a constant gap for all σ.
The difference between (P2) and (P1) is the switched roles of the objective function and the constraint. In (P1), the tolerance ε defines a user-controlled upper bound on the risk gap, whereas in (P2) the ε is eliminated. Note that the omission of ε in (P2) does not imply better or worse performance, since (P1) and (P2) serve two different goals. (P1) utilizes the underlying testing distribution p(σ), whereas (P2) does not. It is possible that p(σ) is skewed towards high-noise scenarios, in which case a constant risk gap will suffer from insufficient performance at high noise and over-perform at low noise, which matters little under such a p(σ).
5.2 Algorithm for Solving (P2)
The algorithm for solving (P2) is slightly different from that of (P1) because of the omission of the ε-constraint.
We first rewrite problem (P2) as

min_{f∈F, ε} ε   subject to   R(f, σ) − R(f_δσ, σ) ≤ ε, for all σ ∈ Σ.  (15)
Then the Lagrangian is defined as

L(f, ε, λ) = ε + ∫_Σ λ(σ) [ R(f, σ) − R(f_δσ, σ) − ε ] dσ.  (16)
Minimizing over f and ε yields the dual function. Minimizing over ε forces ∫_Σ λ(σ) dσ = 1 (otherwise the minimum is −∞), leaving

g(λ) = min_{f∈F} ∫_Σ λ(σ) [ R(f, σ) − R(f_δσ, σ) ] dσ.  (17)
Consequently, the dual problem is defined as

λ* = argmax_{λ≥0} g(λ)   subject to   ∫_Σ λ(σ) dσ = 1.  (18)
Again, if F is convex then solving the dual problem (18) is necessary and sufficient for solving the primal problem (15), which is equivalent to (P2). The dual problem is solvable using the dual ascent algorithm, where we update f and λ according to the following sequence:
f^{t+1} = argmin_{f∈F} ∫_Σ λ^t(σ) R(f, σ) dσ,  (19)

λ̃^{t+1}(σ) = λ^t(σ) + α^t ( R(f^{t+1}, σ) − R(f_δσ, σ) ),  (20)

λ^{t+1}(σ) = λ̃^{t+1}(σ) / ∫_Σ λ̃^{t+1}(σ') dσ'.  (21)
Here, (19) solves the inner optimization in (18) for a fixed λ, and (20) is a gradient ascent step for the dual variable. The normalization in (21) ensures that the constraint of (18) is satisfied. The nonnegativity projection in (20) can be lifted because, by definition, R(f, σ) ≥ R(f_δσ, σ) for all σ. The final sampling distribution is π = λ*.
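A minimal runnable sketch of the updates (19)-(21), using an assumed toy scalar denoiser with a closed-form training rule (all numerical values are illustrative, not the paper's experiment):

```python
import numpy as np

# Assumed toy model: y = x + sigma * n, f(y) = a * y, E[x^2] = mu2, with
# closed-form training a(w) = mu2 / (mu2 + E_w[sigma^2]) and conditional risk
# R(a, sigma) = (a - 1)^2 * mu2 + a^2 * sigma^2.
mu2 = 1.0
sigmas = np.linspace(0.1, 1.0, 10)

def train(weights):
    return mu2 / (mu2 + np.sum(weights * sigmas ** 2))

def cond_risk(a):
    return (a - 1.0) ** 2 * mu2 + a ** 2 * sigmas ** 2

R_best = mu2 * sigmas ** 2 / (mu2 + sigmas ** 2)   # best individual risks

lam = np.full(len(sigmas), 1.0 / len(sigmas))      # start uniform on the simplex
alpha = 1.0
for _ in range(500):
    a = train(lam)                                  # (19): retrain with lambda
    lam = lam + alpha * (cond_risk(a) - R_best)     # (20): ascent on the gap
    lam = lam / np.sum(lam)                         # (21): renormalize

gap_p2 = cond_risk(train(lam)) - R_best
gap_unif = cond_risk(train(np.full(len(sigmas), 0.1))) - R_best
# The (P2) distribution should shrink the worst-case gap relative to uniform
# training, with a large share of its mass at the two ends of the noise range.
```

In this toy example the returned λ concentrates at the extreme noise levels, mirroring the skewed distributions reported in Table 1.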
6 Practical Considerations
The actual implementation of the dual ascent algorithms for (P1) and (P2) requires additional modifications. We list a few of them here.
Finite Epochs. In principle, the subproblems in (12) and (19) are solved by completely training a network using the sampling distributions at the t-th iteration, p + λ^t and λ^t, respectively. In practice, however, we can reduce the training time by training the network inexactly. Depending on the specific network architecture and problem type, the number of epochs varies between 10 and 50 per dual ascent iteration.
Discretize Noise Levels. The theoretical results presented in this paper are based on continuous distributions π(σ) and p(σ). In practice, a continuum is not necessary, since nearby noise levels are usually visually indistinguishable. As such, we discretize the noise levels into a finite number of bins so that the integrations simplify to summations.
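A minimal sketch of this discretization, assuming the noise range [0, 100] and the 10 equal bins used later in the experiments (the sampling distribution here is a placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)

edges = np.linspace(0.0, 100.0, 11)        # 10 equal bins over sigma in [0, 100]
centers = 0.5 * (edges[:-1] + edges[1:])
pi = np.full(10, 0.1)                      # a discretized sampling distribution

def overall_risk(cond_risk_per_bin, weights):
    """The integral over sigma reduced to a weighted sum over the bins."""
    return float(np.sum(weights * cond_risk_per_bin))

def sample_sigmas(n):
    """Draw n noise levels: pick a bin according to pi, then sample uniformly inside it."""
    k = rng.choice(len(pi), size=n, p=pi)
    return rng.uniform(edges[k], edges[k + 1])

sig = sample_sigmas(128)                   # e.g., noise levels for one minibatch
```

The same binning serves both purposes at once: evaluating the discretized risk and drawing training noise levels from the current distribution.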
Interpolate Best Individuals. The theory above requires knowledge of the best individual risks R(f_δσ, σ) at all σ's, which is computationally infeasible to obtain. We approximate them by first computing the values at several specific σ's; this involves training the network separately for a few noise levels. Afterwards, a simple linear interpolation is used to predict R(f_δσ, σ) at the σ's that are not trained. Since the function is typically smooth, linear interpolation is reasonably accurate.

Scale Constraints. Most image restoration applications measure restoration quality on a log scale, e.g., the peak signal-to-noise ratio (PSNR), defined as PSNR = 10 log10(255² / MSE), where MSE is the mean squared error. Learning on the log scale can be achieved by enforcing the constraint in log space.
We define the log-scale (PSNR) risk function as

R_dB(f, σ) = E_{(x,y)|σ}[ 10 log10( 255² / ℓ(f(y), x) ) ].  (22)
With this definition, it follows that the constraints in the log scale are represented as R_dB(f_δσ, σ) − R_dB(f, σ) ≤ ε_dB for all σ. To turn this log-scale constraint into a linear form, we use the following lemma, which exploits the fact that the risk gap is typically small.
Lemma 3.
The log-scale constraint

R_dB(f_δσ, σ) − R_dB(f, σ) ≤ ε_dB, for all σ ∈ Σ,  (23)

can be approximated by

R(f, σ) − (1 + γ) R(f_δσ, σ) ≤ 0, for all σ ∈ Σ,  (24)

where γ is a constant (w.r.t. σ) such that the log of 1 + γ equals ε_dB:

10 log10(1 + γ) = ε_dB.  (25)
Proof.
First, we observe that R(f_δσ, σ) is a deterministic quantity and is independent of f. Since the risk gap is small, the loss concentrates around its mean, and we can show that

R_dB(f_δσ, σ) − R_dB(f, σ) ≈ 10 log10( R(f, σ) / R(f_δσ, σ) ),

where we used the fact that E[log ℓ] ≈ log E[ℓ] = log R. Putting this into the constraint (23) and rearranging the terms using (25) completes the proof. ∎
The consequence of the above analysis is the following approximate problem for training in the log scale:

(P1-log)   min_{f∈F} R(f)   subject to   sup_{σ∈Σ} { R(f, σ) − (1 + γ) R(f_δσ, σ) } ≤ 0.

The implication of (P1-log) is that the optimization problem with log-scale constraints can be solved using the linear-scale approaches. The sampling distribution is again π(σ) ∝ p(σ) + λ*(σ). The only other change is that we replace R(f_δσ, σ) with (1 + γ) R(f_δσ, σ), which is determined offline.
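As a numerical sanity check on Lemma 3, the correspondence between a PSNR gap and an MSE ratio can be verified directly. The values below (a 0.4 dB tolerance, as in the experiments, and a made-up MSE) are illustrative:

```python
import numpy as np

def psnr(mse, peak=255.0):
    """PSNR in dB of a mean-squared error, for an 8-bit signal."""
    return 10.0 * np.log10(peak ** 2 / mse)

eps_db = 0.4                            # PSNR tolerance in dB
gamma = 10.0 ** (eps_db / 10.0) - 1.0   # (25): 10*log10(1 + gamma) = eps_db

mse_best = 25.0                         # hypothetical best-individual MSE
mse_f = (1.0 + gamma) * mse_best        # boundary of the linear constraint (24)
db_gap = psnr(mse_best) - psnr(mse_f)
# db_gap recovers eps_db, and for small tolerances
# gamma is approximately (ln 10 / 10) * eps_db.
```

In other words, bounding the PSNR gap by ε_dB is the same as bounding the MSE ratio by 1 + γ, which is the linear form used in (P1-log).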
7 Experiments
We evaluate the proposed framework through two experiments. The first experiment is based on a linear estimator where analytic solutions are available to verify the dual ascent algorithm. The second experiment is based on training a real deep neural network.
7.1 Linear Estimator
We consider a linear (scalar) estimator so that we have access to analytic solutions. The clean signal is a zero-mean scalar x with E[x²] = μ², and the noisy signal is y = x + σn, where n ~ N(0, 1). The estimator we choose here is f(y) = a_π y, for some parameter a_π depending on the underlying sampling distribution π.

Because of the linear model formulation, we can train the estimator in closed form:

a_π = μ² / ( μ² + E_π[σ²] ),

where E_π[σ²] = ∫_Σ σ² π(σ) dσ. Substituting a_π into the loss, we can show that the conditional risk is

R(a_π, σ) = (a_π − 1)² μ² + a_π² σ².
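The closed form can be checked numerically. In this sketch, the signal power, the noise grid, and the sampling distribution are assumed values; we fit the parameter by least squares on synthetic samples and compare it against the closed form:

```python
import numpy as np

rng = np.random.default_rng(1)

mu2 = 1.0
sigmas = np.array([0.2, 0.5, 1.0])
pi = np.array([0.5, 0.3, 0.2])                      # a sampling distribution

a_closed = mu2 / (mu2 + np.sum(pi * sigmas ** 2))   # closed-form trained parameter

# Empirical least-squares fit of a from samples drawn according to pi:
# x is zero-mean Gaussian with E[x^2] = mu2, and y = x + sigma * n.
n = 200_000
sig = rng.choice(sigmas, size=n, p=pi)
x = rng.normal(0.0, np.sqrt(mu2), size=n)
y = x + sig * rng.normal(size=n)
a_ls = np.sum(y * x) / np.sum(y * y)                # argmin_a of the empirical MSE

# a_ls should agree with a_closed up to Monte-Carlo error.
```

The agreement confirms that "training" in this scalar model reduces to the closed-form expression, which is what makes the analytic verification of the dual ascent algorithm possible.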
Based on these conditional risks, we can run the dual ascent algorithm to alternately estimate f and λ according to (P1). Figure 3 shows the conditional risks returned at different iterations of the dual ascent algorithm. In this numerical example, we fix the signal power μ² and the tolerance ε. Observe that as the dual ascent algorithm proceeds, the worst-case gap shrinks. (The small gap in the middle of the plot is intrinsic to this problem: for any π there always exists a σ such that a_π = a_δσ. At this σ, the conditional risk always touches the ideal curve.) When the algorithm converges, it matches the theoretical solution exactly.
Table 1: Sampling distributions and average PSNR (dB) per noise bin.

| Noise level (σ) | 0–10 | 10–20 | 20–30 | 30–40 | 40–50 | 50–60 | 60–70 | 70–80 | 80–90 | 90–100 |
|---|---|---|---|---|---|---|---|---|---|---|
| Ideal (best individually trained denoisers): PSNR | 38.04 | 31.73 | 29.23 | 27.72 | 26.66 | 25.86 | 25.24 | 24.70 | 24.25 | 23.84 |
| Uniform distribution: distribution | 10.0% | 10.0% | 10.0% | 10.0% | 10.0% | 10.0% | 10.0% | 10.0% | 10.0% | 10.0% |
| Uniform distribution: PSNR | 37.24 | 31.41 | 29.04 | 27.60 | 26.58 | 25.81 | 25.19 | 24.67 | 24.23 | 23.84 |
| (P1) with 0.4 dB gap: distribution | 32.7% | 12.0% | 9.4% | 7.9% | 6.8% | 6.3% | 6.4% | 6.2% | 6.2% | 6.1% |
| (P1) with 0.4 dB gap: PSNR | 37.64 | 31.46 | 29.03 | 27.58 | 26.56 | 25.78 | 25.15 | 24.63 | 24.19 | 23.80 |
| (P2): distribution | 81.3% | 7.6% | 3.4% | 2.0% | 1.3% | 1.0% | 0.9% | 0.9% | 0.8% | 0.8% |
| (P2): PSNR | 37.86 | 31.54 | 29.06 | 27.57 | 26.53 | 25.74 | 25.10 | 24.57 | 24.12 | 23.70 |
7.2 Deep Neural Networks
The second experiment evaluates the effectiveness of the proposed framework on real deep neural networks for the task of denoising. We focus on the MSE loss with PSNR constraints, although our theory also applies to other loss functions, such as SSIM (Wang et al., 2004) and MS-SSIM (Wang et al., 2003), as long as they are convex. The noise model we assume is y = x + σn, where n is i.i.d. Gaussian and σ ∈ [0, 100] (w.r.t. an 8-bit signal of 256 levels). The network we consider is a 20-layer DnCNN (Zhang et al., 2017). We choose DnCNN just for demonstration: since our framework does not depend on a specific network architecture, the theoretical results hold regardless of the choice of network.
The training procedure is as follows. The training set consists of 400 images from the dataset in (Martin et al., 2001). We randomly crop patches from these images to construct the training set. The total number of patches used is determined by the minibatch size of the training algorithm. Specifically, for each dual ascent iteration we use 3000 minibatches, where each minibatch consists of 128 patches. This gives 384k training patches per epoch. To create the noisy training samples, for each patch we add i.i.d. Gaussian noise whose noise level is randomly drawn from the distribution π(σ). The noise generation is done online. We run the proposed algorithm for 25 dual ascent iterations, where each iteration consists of 10 epochs. For computational efficiency, we partition the noise range into 10 equally sized bins; for example, a uniform distribution corresponds to allocating 10% of the training samples to each bin. The validation set consists of 12 "standard images" (e.g., Lena). The testing set is the BSD68 dataset (Roth and Black, 2005), tested individually for every noise bin. The testing distribution p(σ) for (P1) is assumed to be uniform. Notice that (P2) does not require the testing distribution to be known.
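The online noise generation step can be sketched as follows. The patch size, the random seed, and the stand-in clean patches are assumptions; the 10-bin layout and the minibatch size of 128 follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)

edges = np.linspace(0.0, 100.0, 11)           # 10 equal noise bins (8-bit units)

def noisy_minibatch(clean_patches, pi):
    """Add i.i.d. Gaussian noise with a per-patch noise level drawn from pi."""
    b = clean_patches.shape[0]
    k = rng.choice(len(pi), size=b, p=pi)     # pick a noise bin for each patch
    sigma = rng.uniform(edges[k], edges[k + 1]) / 255.0  # rescale to [0, 1] range
    noise = rng.normal(size=clean_patches.shape) * sigma[:, None, None]
    return clean_patches + noise, sigma

patches = rng.random((128, 40, 40))           # stand-in for cropped clean patches
pi = np.full(10, 0.1)                         # uniform baseline over the bins
noisy, sigma = noisy_minibatch(patches, pi)
```

At each dual ascent iteration, only `pi` changes; the patch pipeline itself stays fixed, which is what makes reweighting the sampling distribution cheap.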
The average PSNR values (conditioned on σ) are reported in Table 1, and the performance gaps are illustrated in Figure 4. Specifically, the first two rows of the table show the PSNR of the best individually trained denoisers and of the uniform distribution. The proposed sampling distributions and the corresponding PSNR values are shown in the third row for (P1) and the fourth row for (P2). For (P1), we set the tolerance to 0.4 dB. Table 1 and Figure 4 confirm the validity of our method. A more interesting observation is the percentage allocation of the training samples: for (P1), we need to allocate 32.7% of the data to the lowest-noise bin, and this percentage goes up to 81.3% for (P2). This suggests that the optimal sampling distribution can be substantially different from the uniform distribution commonly used today.
8 Conclusion
It is important to note that one-size-fits-all denoisers trade off the high-noise and low-noise cases. The uniform gap returned by (P2) is not necessarily "better," because its solution is agnostic to the underlying distribution p(σ). If we know p(σ), then the optimal distribution should be determined by (P1). Nevertheless, the proposed framework addresses a useful question: how to draw samples for one-size-fits-all denoisers. The convexity of the problem, the minimax formulation, and the dual ascent algorithm appear to be general for all learning-based estimators. The idea is also likely applicable to adversarial training in classification tasks.
Acknowledgement
The work is supported, in part, by the US National Science Foundation under grants CCF-1763896 and CCF-1718007.
The authors thank Yash Sanghvi and Guanzhe Hong for invaluable discussions on this paper. The authors also thank the anonymous reviewers for the constructive feedback which significantly improved the paper.
References

- Calders and Verwer (2010). Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery, 21(2), pp. 277–292.
- Chaloner and Verdinelli (1995). Bayesian experimental design: a review. Statistical Science, pp. 273–304.
- Chan et al. (2016). Plug-and-Play ADMM for image restoration: fixed-point convergence and applications. IEEE Transactions on Computational Imaging, 3(1), pp. 84–98.
- Choi et al. (2019). Optimal combination of image denoisers. IEEE Transactions on Image Processing, 28(8).
- Gal et al. (2017). Deep Bayesian active learning with image data. In International Conference on Machine Learning, Vol. 70, pp. 1183–1192.
- Gharbi et al. (2016). Deep joint demosaicking and denoising. ACM Transactions on Graphics, 35(6), pp. 191:1–191:12.
- Hardt et al. (2016). Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pp. 3315–3323.
- Kamishima et al. (2012). Fairness-aware classifier with prejudice remover regularizer. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 35–50.
- Kim et al. (2019). Adaptively tuning a convolutional neural network by gate process for image denoising. IEEE Access, 7, pp. 63447–63456.
- [10] Machine Learning 10-725, CMU lecture notes.
- Mao et al. (2016). Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In Advances in Neural Information Processing Systems, pp. 2802–2810.
- Márquez-Neila et al. (2017). Imposing hard constraints on deep networks: promises and limitations. arXiv preprint arXiv:1706.02025.
- Martin et al. (2001). A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In International Conference on Computer Vision, Vol. 2, pp. 416–423.
- Pathak et al. (2015). Constrained convolutional neural networks for weakly supervised segmentation. In International Conference on Computer Vision, pp. 1796–1804.
- Pedreshi et al. (2008). Discrimination-aware data mining. In International Conference on Knowledge Discovery and Data Mining, pp. 560–568.
- Petersen et al. (2018). Topological properties of the set of functions generated by neural networks of fixed size. arXiv:1806.08459.
- Platt and Barr (1987). Constrained differential optimization. In International Conference on Neural Information Processing Systems, pp. 612–621.
- Remez et al. (2017). Deep class-aware image denoising. In International Conference on Image Processing, pp. 1895–1899.
- Roth and Black (2005). Fields of experts: a framework for learning image priors. In Computer Vision and Pattern Recognition, Vol. 2, pp. 860–867.
- Sener and Savarese (2018). Active learning for convolutional neural networks: a core-set approach. In International Conference on Learning Representations.
- Settles (2009). Active learning literature survey. Technical report, University of Wisconsin-Madison, Department of Computer Sciences.
- Wang et al. (2003). Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Vol. 2, pp. 1398–1402.
- Zafar et al. (2017). Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. In International World Wide Web Conference, pp. 1171–1180.
- Zafar et al. (2015). Fairness constraints: mechanisms for fair classification. arXiv preprint arXiv:1507.05259.
- Zak et al. (1995). Solving linear programming problems with neural networks: a comparative study. IEEE Transactions on Neural Networks, 6(1), pp. 94–104.
- Zhang et al. (2017). Learning deep CNN denoiser prior for image restoration. In Computer Vision and Pattern Recognition, pp. 2808–2817.
- Zhang et al. (2018). FFDNet: toward a fast and flexible solution for CNN-based image denoising. IEEE Transactions on Image Processing, 27(9), pp. 4608–4622.
- Wang et al. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), pp. 600–612.