There are a few different logs that may be useful when debugging:

  • Server log: By default this is on the server in OUTPUT_DIR/turbinia-server.log

  • Worker logs: By default this is on the workers in OUTPUT_DIR/turbinia-worker.log

  • Task logs: These are logs generated by a Task running on the Worker. If you have GCS enabled, these logs are saved there; in either case, the paths can be determined with turbiniactl -a status (note that -a is required to show all data associated with the Tasks).

  • Task execution logs: These are logs generated by binaries executed by a Task. For example, Plaso generates its own log file, which is saved by the Task. As with the Task logs, these can be stored in GCS and retrieved with gsutil.

  • If you have deployed Turbinia in GKE and/or have STACKDRIVER_TRACEBACK enabled in the config, logs and/or traceback exceptions will also be logged to the Stackdriver console of the project that Turbinia runs in.
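Once you know where the logs live, inspecting them is ordinary file work. A minimal sketch, using a temp directory as a stand-in for your configured OUTPUT_DIR and fabricated log lines for illustration:

```shell
# Stand-in for the real OUTPUT_DIR from the Turbinia config;
# the log contents here are fabricated for illustration only.
OUTPUT_DIR=$(mktemp -d)
echo "server started" > "$OUTPUT_DIR/turbinia-server.log"
echo "worker started" > "$OUTPUT_DIR/turbinia-worker.log"

# Tail the most recent entries from the server and worker logs:
tail -n 20 "$OUTPUT_DIR/turbinia-server.log"
tail -n 20 "$OUTPUT_DIR/turbinia-worker.log"
```

On a real deployment you would point tail (or tail -f, to follow) at the server and worker log paths above rather than creating them.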

Request Debugging

Finding previous request data

By default turbiniactl will make a processing request and immediately return. It will print out the request ID when it makes the request, and you can use this to see the current status of the request with turbiniactl status -r <request id>. If you specify the -w flag (turbiniactl -w status -r <request id>), the client will wait for all Tasks to complete and then return. Killing the client will not affect the processing, and you can resume waiting for the task results with the same command.

If you do not have your request ID, you can list all recent requests like this:

$ turbiniactl status -i

# Turbinia report for Requests made within 7 days
* 2 requests were made within this timeframe.

## Request ID: 5198949e3fdb8dk569de1268000f97d8
* Last Update: 2021-02-10T08:39:27.446000Z
* Requester: turbiniauser
* Task Count: 441

## Request ID: 69de1268000f97d85198949e3fdb8dk5
* Last Update: 2021-01-12T07:09:03.123000Z
* Requester: turbiniauser
* Task Count: 43

No Jobs being created

If you create a processing request with turbiniactl and the client just hangs at the message Tasks completed (0/0): [], waiting for []., it’s possible that either the request never reached the server or the server crashed. In this case, take a look at the server logs to see whether the server accepted the request (you can grep for the request ID, which should be in the client output) and note any log messages around that point. If the server crashed, the log messages immediately prior to the crash may be useful in determining the cause. If there are server crashes or other problems processing or validating a request, please file a GitHub Issue with the relevant details.
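As a sketch of the grep step above — the log is a temp-file stand-in with a fabricated entry, and the request ID is the example one from this page; substitute your real server log path and request ID:

```shell
# Stand-in server log with a fabricated entry for illustration.
LOG=$(mktemp)
echo "Received request 5198949e3fdb8dk569de1268000f97d8" > "$LOG"

# Grep for the request ID printed by the client, with line numbers
# so you can locate and read the surrounding log messages:
grep -n '5198949e3fdb8dk569de1268000f97d8' "$LOG"
```

If the grep returns nothing in the real server log, the request likely never reached the server; if it matches, read the messages that follow it for errors.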

Task Debugging

Task Failures

The following is an example of debugging a task failure:

Running turbiniactl, we see a failure of the PsortTask (-d1 specifies history for the last day):

$ ./turbiniactl status -d1

Retrieved 2 Task results:
2017-12-06T15:09:15.013Z PsortTask Failed: Execution failed with status 1
2017-12-06T15:08:49.616Z PlasoTask Successful: Completed successfully in 0:00:14.178579 on

We can specify -a to get all of the information we have about that Task, including the request ID, Task ID, and the other logs associated with the Task.

$ ./turbiniactl -a status -d1

Retrieved 2 Task results:
2017-12-06T15:09:15.013Z request: None task: 1f8c4b321f444614bde296b0a00bb91f PsortTask Failed: Execution failed with status 1
2017-12-06T15:08:49.616Z request: b94ad420cf5a483fb3fc4409d5fc6dcd task: 020fe728a2b64dc7ba1da2656e1cc454 PlasoTask Successful: Completed successfully in 0:00:14.178579 on

Note that there may be local paths that show up in the turbiniactl status output, and these are local to the Worker that executed that Task.

Using the saved paths above we can display the output with gsutil.

$ gsutil cat gs://my-turbinia-bucket/output/1512601742-1f8c4b321f444614bde296b0a00bb91f-PsortTask/worker-log.txt

Running psort as [ --status_view none --logfile /var/tmp/1512601742-1f8c4b321f444614bde296b0a00bb91f-PsortTask/1f8c4b321f444614bde296b0a00bb91f.log -w /var/tmp/1512601742-1f8c4b321f444614bde296b0a00bb91f-PsortTask/1f8c4b321f444614bde296b0a00bb91f.csv /var/tmp/1512601730-b04f479ef9094954968474431aa30e8a-PsortTask/b04f479ef9094954968474431aa30e8a.csv]
Execution failed with status 1

Full Task report data

To see the report output along with the request status, you can specify -R, e.g. turbiniactl status -R -r <request id>. By default this output is filtered to show only high-priority Task report output (as determined by the Task at runtime) in order to filter out uninteresting report info. You can specify -p <prio num> to set which priority is shown in the output report; to see all report output, specify -p 100. To also show all saved files, specify -a. Putting these together will show all output, which can be quite a large amount of data: turbiniactl -a status -r <request id> -R -p 100.

Determining Task status

If you want a view of all currently running Tasks, you can specify turbiniactl status -w to see which Tasks are running or queued on each of the Workers. To also include the completed Tasks, add -a: turbiniactl -a status -w.

Writing debug logs for binary dependencies

Some Turbinia Tasks execute binaries that can also write their own debug logs during execution. This can be expensive, so it is turned off by default. You can specify -T during the request to enable these logs temporarily, per-request (e.g. turbiniactl -T googleclouddisk -d disk-to-process). Alternatively, you can set DEBUG_TASKS = True in the config file for the Worker.

Disappearing Tasks

If the Turbinia server is scheduling Tasks, but you’re not seeing the Tasks or their output for some reason, the most common cause is that a second Turbinia server or a second set of Workers is running and operating on the same PubSub queues. Make sure that you have unique values for each Turbinia instance for the following config variables: PSQ_TOPIC, INSTANCE_ID, PUBSUB_TOPIC and KOMBU_CHANNEL (if you are using a local Turbinia installation type).
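A quick way to compare these values across instances is to grep them out of each instance's config. A sketch, using a temp file with hypothetical values as a stand-in for the real config file:

```shell
# Stand-in config with hypothetical per-instance values;
# substitute the path to your actual Turbinia config file.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
INSTANCE_ID = 'turbinia-prod-1'
PSQ_TOPIC = 'psq-turbinia-prod-1'
PUBSUB_TOPIC = 'turbinia-prod-1'
KOMBU_CHANNEL = 'kombu-turbinia-prod-1'
EOF

# Pull out the values that must be unique per instance:
grep -E 'INSTANCE_ID|PSQ_TOPIC|PUBSUB_TOPIC|KOMBU_CHANNEL' "$CONFIG"
```

Running the grep against the config of each suspected instance makes any accidental overlap in these values easy to spot.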

Common Errors

  • If you encounter a cycle of errors on the Turbinia server when it decodes specific malformed pubsub messages, and the failure happens before the server acknowledges the pubsub message, you may need to clear the pubsub queue manually. If you are sure there are no other outstanding requests, you can simply pull and auto-ack the existing messages; otherwise you may need a variation of this command to selectively ack messages:

gcloud beta pubsub subscriptions pull --auto-ack $my-pubsub-subscription-name