There is increasing focus on both style transfer and user-interaction-based image editing. Commercial apps provide users with the means to edit their images to their liking. Although two-step methods exist that allow the user to make significant changes to an image and then perform style transfer, these methods tend to distort the image and remove the aesthetic quality the user desires. We propose a new Generative Adversarial Network (GAN), called FRGAN, that maintains the user-suggested changes to the image and performs style transfer while retaining those changes. We qualitatively demonstrate the efficacy of the FRGAN formulation over various two-step GAN methods and traditional style transfer methods, and we use the Mean Opinion Score (MOS) metric to quantify our proposed model's performance.