Deep neural networks are known to be susceptible to adversarial perturbations: small, norm-bounded changes to the input that alter the network's output. Universal adversarial perturbations aim to alter the model's output on a set of out-of-sample data and thus represent a more realistic threat model, since knowledge of the model's exact input is not required. Patch adversarial attacks further restrict the perturbation to patches of a given shape and number. This work studies realistic applications of adversarial attacks on vision-based models and the robustness of inference-time defenses. We first consider a randomized smoothing-based defense and show that adversarial attacks can generalize to distributions of inputs and models. We then optimize physical, passive, universal patch adversarial attacks against visual odometry-based autonomous navigation systems; such patch perturbations pose a severe security risk for these systems, as they can mislead them onto a collision course. Finally, we consider the optimal placement of multiple such patches and, to the best of our knowledge, present the first direct solution for jointly optimizing the locations and perturbations of multiple patches.