Issue #1854

CVE-2016-3696 Leakage of CA key in pulp-qpid-ssl-cfg

Added by rbarlow over 8 years ago. Updated over 5 years ago.

Status: CLOSED - CURRENTRELEASE
Priority: High
Assignee: -
Category: -
Sprint/Milestone: -
Start date: -
Due date: -
Estimated time: -
Severity: 2. Medium
Version: -
Platform Release: 2.8.5
OS: -
Triaged: Yes
Groomed: No
Sprint Candidate: No
Tags: Pulp 2
Sprint: -
Quarter: -

Description

Sander Bos reported that the pulp-qpid-ssl-cfg script creates certificate files and NSS database files in a world-readable, unsafely created temporary directory $DIR, from which the content is then copied to the permanent installation directory $INST_DIR with wrongly assigned permissions that are corrected only after the copying is done. This bug gives an attacker a window of time in which to steal sensitive data.

Thanks to Sander Bos for reporting the issue, and to Adam Mariš for analysing the issue and writing the description included above.

From the initial report by Sander Bos (copied here with permission):

1. server/bin/pulp-qpid-ssl-cfg

a) Race conditions (candidate for a CVE):

Files are first copied into "INST_DIR" (a user-traversable directory,
as I understand it), and only after that is chmod(1) called:

   mkdir -p $INST_DIR
   mkdir -p $INST_DIR/nss
   cp $DIR/*.crt $INST_DIR
   cp $DIR/*.db $INST_DIR/nss
   cp $DIR/$PWDFILE $INST_DIR/nss

   # update perms
   chmod 640 $INST_DIR/*.crt
   [...]
   chmod 640 $INST_DIR/nss/*

Proposed fix: use a "umask 077" prior to the cp(1) (or even the mkdir(1))
calls, or perhaps even put such "umask 077" on top to cover the whole
script (including other mkdir(1) calls, for example).
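
For illustration, a minimal sketch of that umask approach applied to the
copy-and-chmod excerpt above (variable names are taken from that excerpt;
the exact placement in the real script is an assumption):

   # create nothing world-readable from this point on: new files and
   # directories get at most mode 600/700, and the chmod calls below
   # then merely relax them to the intended values
   umask 077

   mkdir -p $INST_DIR
   mkdir -p $INST_DIR/nss
   cp $DIR/*.crt $INST_DIR
   cp $DIR/*.db $INST_DIR/nss
   cp $DIR/$PWDFILE $INST_DIR/nss

   # update perms
   chmod 640 $INST_DIR/*.crt
   chmod 640 $INST_DIR/nss/*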

b) Unsafe creation of, and dangerous rm(1) operation on, a temporary
   directory (at least the unsafe creation part is a candidate for a CVE):

   DIR="/tmp/tmp$RANDOM
   [...]
   # create temporary db directory
   rm -rf $DIR
   mkdir $DIR

First off, assuming this is the reason why this script handles the
directory this way: "rm -rf $DIR" is a terrible way to ensure a directory
"does not exist and will be newly created".

Secondly: executing "rm -rf" on a variable-defined directory can have
unforeseen consequences.  The directory may legitimately exist, in which
case a directory simply gets deleted.  This is bad.

Alternatively, as an attack scenario, the directory may purposely have
been created by a user prior to the script's execution.  This means the
directory is fully user-controllable: the user is able to define its
to-be-deleted contents, including file names, file types, number of files
(think DoS when constantly recreating new files), et cetera.

Even though rm(1) deletes a symbolic link and not the file (or directory)
it links to, attacks involving symbolic links might still be possible.
Or, a user might have a user mount point defined on the directory, in the
worst case perhaps even an overlay or loop mount of "/" or of a different
directory.

Thus, several attack types can be imagined, possibly even resulting in an
"rm -rf" of an arbitrary directory, e.g., "/".

(The scripts server/bin/pulp-gen-ca-certificate and
nodes/common/bin/pulp-gen-nodes-certificate used to have "rm -rf"
on a variable directory ("rm -rf $TMP") as well (as clean-up, not to
ensure it could be newly created), but both scripts have since replaced
that with alternatives.)

As said, /tmp/tmp$RANDOM may have been created by a user prior to
execution of the script, resulting in files created by the script being
readable, or controllable in other ways, by that user.

However, a race condition also exists in that the directory
/tmp/tmp$RANDOM could be created in between the "rm -rf $DIR" and the
"mkdir $DIR" (or, alternatively, a symbolic link with that name could be
created there, linking to an arbitrary file or directory).

If the user, for example, creates many files in /tmp/tmp$RANDOM, the
rm(1) step will take a long time. The user can then check the process
table to see which /tmp/tmp$RANDOM directory is actually being deleted,
poll for the existence of that rm(1) process in a loop, and re-create
the directory immediately once the process no longer exists.

It's also possible (and easier) to make the "rm -rf $DIR" part fail
by recursively creating new files in $DIR (after the rm(1) has started).
This way, "rm -rf $DIR" will fail with "rm: cannot remove: <directory>
Directory not empty", and the directory stays in place.

Getting to the attack effect: various "certutil -d" calls operate in
the directory, which means that, for example, certificates could end up
readable because they are placed in such a user-controlled directory, or
important system files (e.g., /etc/passwd) could be overwritten when
symbolic links to such files exist in /tmp/tmp$RANDOM.

Proposed fix: use mktemp(1):

replace:

   - DIR="/tmp/tmp$RANDOM
   - [...]
   - # create temporary db directory
   - rm -rf $DIR
   - mkdir $DIR

by:

   + DIR=$(mktemp -d) || exit 1
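
For completeness, a sketch of how the two proposed fixes might be
combined near the top of the script; the trap-based clean-up is an
additional suggestion, not part of the proposed fix above:

   # never create anything world-readable (covers the later
   # mkdir/cp/certutil calls as well)
   umask 077

   # create the working directory atomically, owned by the caller, mode 0700
   DIR=$(mktemp -d) || exit 1

   # remove the temporary directory again when the script exits
   trap 'rm -rf "$DIR"' EXIT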
