Production of high-quality virtual reality content from real sensed data is a challenging task due to several factors, such as the calibration of multiple cameras and the rendering of virtual views. In this paper, we present a pipeline that maximizes the performance of virtual view rendering from imagery captured by a camera equipped with fisheye lens optics. While such optics offer a wide field of view, they also introduce specific distortions. These have to be taken into account when rendering virtual views for a target application (e.g., head-mounted displays). We integrate a generic camera model into a fast rendering pipeline in which intrinsic and extrinsic camera parameters, along with resolution, can be tuned to meet device or user requirements. We specifically target a CPU-based implementation with quality on par with GPU-based rendering approaches. Using the adopted generic camera model, we numerically tabulate the required backward projection mapping and store it in a look-up table. This approach trades memory for computational complexity, replacing per-pixel computation of the mapping values with table look-ups. Finally, we complement our method with an interpolator that handles occlusions efficiently. Experimental results demonstrate the viability, robustness, and accuracy of the proposed pipeline.
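The look-up-table idea above can be illustrated with a minimal sketch: the backward projection (from target-view pixel to source-image coordinates) is evaluated once per output pixel and tabulated, after which rendering reduces to indexing. This is a simplified illustration, not the paper's implementation; the function names are hypothetical, the mapping here is a placeholder for the generic camera model, and nearest-neighbor sampling stands in for the paper's occlusion-aware interpolator.

```python
import numpy as np

def build_backward_lut(height, width, backward_project):
    """Tabulate the backward projection mapping for a (height, width)
    target view. backward_project(u, v) -> (x, y) returns source-image
    coordinates; in the paper this would come from the generic camera
    model (intrinsics, extrinsics, lens distortion)."""
    lut = np.empty((height, width, 2), dtype=np.float32)
    for v in range(height):
        for u in range(width):
            lut[v, u] = backward_project(u, v)
    return lut

def render_view(src, lut):
    """Render the target view by sampling the source image through the
    precomputed LUT (nearest neighbor for brevity; a real pipeline
    would interpolate and handle occlusions)."""
    xs = np.clip(np.rint(lut[..., 0]).astype(np.int64), 0, src.shape[1] - 1)
    ys = np.clip(np.rint(lut[..., 1]).astype(np.int64), 0, src.shape[0] - 1)
    return src[ys, xs]

# Usage with a trivial identity mapping (placeholder for the camera model):
src = np.arange(12, dtype=np.uint8).reshape(3, 4)
lut = build_backward_lut(3, 4, lambda u, v: (u, v))
out = render_view(src, lut)
```

The memory/computation trade-off mentioned in the abstract is visible here: the LUT costs two floats per output pixel, but the (potentially expensive) camera-model evaluation moves entirely into the precomputation step.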