Pix2Pix is a widely used method for image-to-image translation. In gesture recognition, previous studies employed Pix2Pix to translate color images into depth images, improving accuracy over using the original color images alone. However, these studies focused mainly on the visual quality of the generated images and ignored the downstream goal of gesture classification. In this study, we propose a discriminative Pix2Pix that translates color images into depth images. Our motivation is to generate images that are more informative for neural networks rather than more pleasing to humans. We introduce a new discriminator, the Feature-level Discriminator (FLD); the original Pix2Pix discriminator can be regarded as an Image-level Discriminator (ILD). Whereas ILD judges an image directly, FLD judges the feature map extracted from that image by a specified convolutional neural network (CNN). We evaluate our approach on the OUHAND dataset, showing that FLD significantly improves the classification accuracy obtained from the generated depth image and the color image in a two-stream framework.
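To make the ILD/FLD distinction concrete, the following is a minimal NumPy sketch, not the paper's actual architecture: the "specified CNN" is reduced to a single convolution with ReLU, and both discriminators are reduced to a PatchGAN-style scoring head. ILD scores the raw image, while FLD scores the CNN's feature map of that image. All function names, kernel sizes, and shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    # Valid-mode 2-D correlation: slide kernel k over x with no padding.
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def feature_map(img, kernel):
    # Stand-in for the specified CNN: one conv layer + ReLU.
    return np.maximum(conv2d(img, kernel), 0.0)

def discriminator_score(x, weights):
    # PatchGAN-style head reduced to one linear scoring convolution,
    # sigmoid per position, averaged into a single real/fake probability.
    logits = conv2d(x, weights)
    return float(np.mean(1.0 / (1.0 + np.exp(-logits))))

img = rng.random((8, 8))               # stand-in "generated depth image"
cnn_kernel = rng.standard_normal((3, 3))
ild_w = rng.standard_normal((3, 3))
fld_w = rng.standard_normal((3, 3))

# ILD: judges the image itself.
ild = discriminator_score(img, ild_w)
# FLD: judges the CNN feature map of the image instead.
fld = discriminator_score(feature_map(img, cnn_kernel), fld_w)
print(ild, fld)  # both are probabilities in (0, 1)
```

In a full adversarial setup, both scores would feed the usual GAN loss; the only change FLD makes is the domain the discriminator sees, i.e. features rather than pixels.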