Juniper vSRX on Proxmox VE
Juniper provides a version of JUNOS, based on the one used by the SRX series, that can be used in a virtual machine. That product is great for Juniper users who want to play with their favorite network OS, and also for people who would like to discover the JUNOS world.
Juniper provides images for VMware and KVM based hypervisors. As a Proxmox VE user you know that it uses KVM to get things done, so getting Firefly Perimeter working on Proxmox VE should be doable without much trouble. Here are the steps to get things working.
Downloading vSRX (Firefly Perimeter)
To set up vSRX on Proxmox VE we need to download the JVA file provided by Juniper. This file is an archive containing the KVM VM definition and the QCOW2 disk of the VM.
Preparing the VM
We then need to create a VM with the following characteristics (see also the end of this article):
- OS: Other OS types (other)
- CD/DVD: Do not use any media
- Hard Disk: VIRTIO0 or IDE0, size of 2 GB, QCOW2 format
- CPU: at least 2 sockets and 1 core, type KVM64 (default on latest versions of Proxmox VE)
- Memory: 1024 MB is recommended (2048 MB is better)
- Network: maximum of 10 interfaces, use VIRTIO or Intel E1000 as model for interfaces
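These characteristics translate into a VM definition along the following lines. This is a sketch, not a file to copy verbatim: the VMID of 100, the MAC address, and the local storage name are hypothetical, and the web interface will generate its own values when you create the VM:

```
# /etc/pve/nodes/NODENAME/qemu-server/100.conf -- hypothetical example
ostype: other
sockets: 2
cores: 1
memory: 1024
ide0: local:100/vm-100-1.qcow2,size=2G
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
```

Creating the VM through the web interface with the values listed above produces an equivalent file.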
Using the vSRX Disk
Now that the VM definition has been created, we need to use the disk provided in the JVA file. For that we first need to extract it.
# bash junos-vsrx-12.1X47-D10.4-domestic.jva -x
The disk will be available in the directory that has been created. We just need to copy it over the disk used by the VM (replace VMID with the ID of your VM).
# cp junos-vsrx-12.1X47-D10.4-domestic.img /var/lib/vz/images/VMID/vm-VMID-1.qcow2
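Since the VMID appears twice in the destination path, it can help to build the path in a small shell sketch first. The VMID of 100 below is hypothetical; the sketch only prints the copy command, so both paths can be checked before running the copy for real:

```shell
#!/bin/sh
# Hypothetical VMID -- replace with the ID of your VM.
VMID=100
IMG=junos-vsrx-12.1X47-D10.4-domestic.img
DEST=/var/lib/vz/images/$VMID/vm-$VMID-1.qcow2
# Print the command first so both paths can be verified.
echo "cp $IMG $DEST"
```

Once the printed paths look right, run the `cp` command itself.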
With this, the VM is now bootable and JUNOS will load properly; we will not be able to interact with it yet, though. For that we need to find a way to send the serial output to Proxmox VE's noVNC console.
Getting the serial output in the Proxmox VE console
First we need to find where our VM definition is stored. Usually it is under /etc/pve/nodes/NODENAME/qemu-server/VMID.conf (replace NODENAME and VMID with your own). Alternatively, we can use a command like the following:
# find / -name 'VMID.conf'
Then we can edit the VM definition file:
# vim /etc/pve/nodes/NODENAME/qemu-server/VMID.conf
And we have to add the following line in the configuration:
args: -serial tcp:localhost:6000,server,nowait
Finally, we need to change the VM display to Cirrus Logic GD 5446 (cirrus), either via the Proxmox VE web interface or by adding vga: cirrus to the VM definition.
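After both changes, the serial and display part of the VM definition should contain lines like these (an excerpt; the rest of the file keeps whatever was generated when the VM was created):

```
args: -serial tcp:localhost:6000,server,nowait
vga: cirrus
```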
We can now just start the VM, and the output will be displayed in the Proxmox VE console. Enjoy using JUNOS in virtual machines.
After some tests I was glad to see that both the disk and the network interfaces can use the VIRTIO drivers. I would recommend using VIRTIO, since it is meant to improve I/O performance at the hypervisor level.