Vault - Site backup and migration
tool_vault
Vault — Site backup and migration (tool_vault) is an admin tool that performs full-site backups (database + dataroot + filedir) to a remote service (lmsvault.io by default) and restores them on the same or a different Moodle site. It exposes a UI under Site administration → Server → Vault, three CLI scripts (cli/site_backup.php, cli/site_restore.php, cli/list_remote_backups.php), a scheduled cron task that drives queued backup/restore operations, an unauthenticated progress.php page protected by a 32-character random access key, and a tools page that allows uninstalling missing plugins. Restore can be combined with an automatic Moodle upgrade — the plugin bundles copies of historical core/plugin upgrade scripts for 3.11.x, 4.0.x, 4.1.x and 4.2.x. All sensitive operations require the moodle/site:config capability.
The plugin is well structured and consistently applies Moodle's security primitives. Every entry point gated through index.php runs admin_externalpage_setup('tool_vault_index'), which enforces moodle/site:config; every state-changing action calls require_sesskey(); every form uses dynamic_form / moodleform; and every external upload validates that the pre-signed URL points to AWS S3 before the encryption key is attached. SQL is parameterised through $DB; HTTP egress goes through the core \curl wrapper; bulk file work uses get_file_storage() and make_backup_temp_directory(). The DDL operations performed during restore — which would be a critical violation in an ordinary plugin — are intrinsic to the plugin's purpose (it is a database restore tool) and are executed only after require_capability('moodle/site:config') and only when the admin has explicitly enabled allowrestore. No exploitable security issues were identified. The findings are limited to small code-quality and compliance gaps: the null_provider privacy declaration is technically inaccurate because the plugin stores the initiating admin's name and email in tool_vault_operation.details; several Mustache templates render log/exception content with {{{...}}} (only reachable as admin-to-admin or with the unguessable progress access key); and a couple of reflection-based hooks into private core methods make the code fragile against core refactors.
Overall assessment
tool_vault is a mature, professionally maintained backup/restore tool that takes the unusual step of dumping the entire Moodle site (DB schema + rows, the dataroot tree, and the filedir content store) and shipping it to lmsvault.io in chunked, optionally passphrase-encrypted ZIP archives. By design it does things that would normally be flagged as critical (DDL on core tables, mass writes into $CFG->dataroot, extensive plugin uninstall/upgrade flows) — but the entire surface is gated behind moodle/site:config and a separately-toggled allowrestore switch.
Security posture
- Authentication/authorisation. The single web entry point index.php calls admin_externalpage_setup('tool_vault_index', …, 'moodle/site:config'), so every UI action class extending tool_vault\local\uiactions\base inherits the admin gate. State-changing actions (backup_startbackup, restore_restore, restore_resume, restore_dryrun, vaulttools, main_forgetapikey, etc.) call require_sesskey() explicitly. Forms use dynamic_form / moodleform, both of which validate sesskey automatically.
- SSRF / outbound HTTP. tool_vault\local\helpers\curl extends Moodle's \curl and forces ignoresecurity = true, which is necessary to talk to S3 pre-signed URLs that resolve to non-allowlisted hosts. The plugin compensates with an explicit allow-list check (preg_match('|^https://[^/]+\.s3\.amazonaws\.com/|', $s3url)) before any S3 request that carries the SSE-C encryption key. The Vault API host is taken from a tool_vault/apiurl config that only an admin can set.
- SQL. All queries use parameterised $DB calls or get_in_or_equal. The few interpolated identifiers (sequence names in Postgres, $CFG->prefix, $tablename, integer $restoreid) come from sanitised sources (regex-restricted, cast to int, or admin-controlled config).
- Filesystem. Direct writes into $CFG->dataroot happen only inside __vault_restore__/ (a temp scratch space the plugin creates and cleans up) and via get_file_storage() for actual filedir content. Temp files use make_backup_temp_directory('tool_vault'). Zip extraction uses core\zip_packer, which has zip-slip protection.
- Cryptography. Backup encryption keys are derived from the user passphrase with a single SHA-256 round (base64_encode(hash('sha256', $passphrase, true))). This is the AWS S3 SSE-C key format expected by the storage backend, not a password-storage hash, but admins should be aware that brute-forcing weak passphrases is feasible if an attacker obtains a copy of the encrypted backup.
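For illustration, the key derivation is easy to reproduce outside PHP. A minimal Python sketch (the passphrase is a made-up example, not plugin data):

```python
import base64
import hashlib

def prepare_encryption_key(passphrase: str) -> str:
    # One unsalted SHA-256 round, base64-encoded: the 32-byte key format that
    # AWS S3 SSE-C expects, mirroring base64_encode(hash('sha256', $passphrase, true)).
    return base64.b64encode(hashlib.sha256(passphrase.encode("utf-8")).digest()).decode("ascii")

key = prepare_encryption_key("correct horse battery staple")
# 32 raw bytes always encode to 44 base64 characters. Because the derivation
# is deterministic and unsalted, an attacker holding an encrypted backup can
# test candidate passphrases offline at one hash per guess.
```

This is why the strength of the backup encryption rests entirely on the passphrase itself rather than on any work factor in the derivation.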
Findings (all low / info)
- The privacy\provider declares null_provider while in fact storing the initiating user's userid, fullname and email in tool_vault_operation.details. The data is admin-only but the declaration is technically incorrect.
- format_log_line() wraps log messages in html_writer::span($message) without escaping, and the error_with_backtrace and logs_with_short templates render those values with triple-brace {{{ … }}}. In practice the messages are admin-controlled or escaped via s(), but exception messages and bundled-backup-controlled identifiers (table/file names from a malicious archive) could theoretically inject HTML into the admin-only progress page.
- The main action passes apikey_form_legacy's returnurl (typed PARAM_URL, not PARAM_LOCALURL) to redirect(). Open-redirect impact is bounded by the moodle/site:config requirement.
- dbops::calculate_row_packet_sizes() and restoreactions\upgrade::disable_caches() use \ReflectionClass / \ReflectionProperty to reach private/protected core internals (mysqli_native_moodle_database::emulate_bound_params, cache_factory::$instance). This is fragile across core versions but not a security issue.
- task\after_upgrade_task::queue() writes directly to the core task_adhoc table. Justified during upgrade where the standard task API may not yet be available, but worth noting.
Notable strengths
- Defence-in-depth around the encryption-key headers (S3 host allow-list).
- Random 32-character access keys for the unauthenticated progress.php page (~190 bits of entropy).
- Explicit opt-in (allowrestore) and a clionly switch to lock the plugin to CLI use.
- Comprehensive PHPUnit test coverage in tests/.
- Centralised UI action dispatcher with class_exists validation, which prevents class-name injection via the ?section= / ?action= parameters.
Findings
The plugin's privacy provider implements \core_privacy\local\metadata\null_provider and returns the language string privacy:metadata ("The Vault plugin doesn't store any personal data."). However, the plugin does store local personal data about the admin who initiated each operation: userid, fullname, email (and username for restores) are persisted in the JSON details column of tool_vault_operation, and these values are subsequently displayed on the backup/restore details pages via get_performedby().
Because null_provider is mutually exclusive with storing personal data, the declaration is incorrect and the plugin should:
- Drop null_provider.
- Implement \core_privacy\local\request\plugin\provider, declaring the relevant fields in tool_vault_operation (and tool_vault_log if log lines reference user identifiers).
- Implement core_userlist_provider and the export/delete callbacks.
The data is admin-only and limited in scope, so the practical impact is small, but this is a compliance gap that prevents a true GDPR data-export/delete request from removing the admin's name and email from the Vault history.
Low risk. The data set is small (admin display name + email per backup/restore), only admins can trigger an entry, and the values originate from $USER so the user already knows them. The risk is purely compliance: a future GDPR audit or tool_dataprivacy data-export will under-report the data the plugin holds.
The Privacy API provider is the sole interface a site has for honouring GDPR data-subject requests. By declaring null_provider the plugin asserts that no personal data is stored and short-circuits the export/delete pipeline; if an admin later runs a Privacy data-export for the user who initiated a backup, that user's name/email persisted in tool_vault_operation.details will not appear in the export and delete_data_for_user() will not remove it.
class provider implements \core_privacy\local\metadata\provider, null_provider {
    public static function get_reason(): string {
        return 'privacy:metadata';
    }

    public static function get_metadata(collection $collection): collection {
        $collection->add_external_location_link(
            'lmsvault.io',
            ['*' => 'privacy:metadata:alldata'],
            'privacy:metadata:lmsvault'
        );
        return $collection;
    }
}
Remove null_provider and declare the local user data alongside the existing external-location entry, then implement the request-side providers so the data can be exported/deleted on a privacy request:
class provider implements
    \core_privacy\local\metadata\provider,
    \core_privacy\local\request\plugin\provider,
    \core_privacy\local\request\core_userlist_provider {

    public static function get_metadata(collection $collection): collection {
        $collection->add_database_table(
            'tool_vault_operation',
            [
                'details' => 'privacy:metadata:tool_vault_operation:details',
                // ... fullname/email/userid descriptions ...
            ],
            'privacy:metadata:tool_vault_operation'
        );
        $collection->add_external_location_link(
            'lmsvault.io',
            ['*' => 'privacy:metadata:alldata'],
            'privacy:metadata:lmsvault'
        );
        return $collection;
    }

    // ... implement get_contexts_for_userid, export_user_data, delete_data_for_user, etc.
}
$model->set_status(constants::STATUS_SCHEDULED)->set_details([
    'usercreated' => $USER->id,
    'description' => substr($params['description'] ?? '', 0, constants::DESCRIPTION_MAX_LENGTH),
    'bucket' => $params['bucket'] ?? '',
    'expiredays' => $params['expiredays'] ?? '',
    'encryptionkey' => $encryptionkey,
    'encrypted' => (bool)strlen($encryptionkey),
    'fullname' => $USER ? fullname($USER) : '',
    'email' => $USER->email ?? '',
])->save();
Either declare these fields in the privacy provider, or stop persisting fullname/email and look the user up by userid when needed for display.
$model = new restore_model();
$model
    ->set_backupkey($backupkey)
    ->set_details([
        'id' => $USER->id ?? '',
        'username' => $USER->username ?? '',
        'fullname' => $USER ? fullname($USER) : '',
        'email' => $USER->email ?? '',
    ]);
Same as above — these identifiers must be exportable/deletable through the Privacy API.
operation_model::format_log_line() builds the per-line markup with \html_writer::span($message, $class). Moodle's html_writer::span calls html_writer::tag which does not escape the $contents argument (verified in lib/classes/output/html_writer.php). The resulting string is then injected into Mustache templates with triple-brace placeholders — {{{logs}}}, {{{logsshort}}} — so any HTML that ends up in a log row is rendered verbatim.
The same pattern applies to error_with_backtrace.mustache ({{{ error }}} and {{{.}}} for the backtrace), where the source is a captured \Throwable::getMessage() / getTraceAsString() stored in tool_vault_operation.details['error'].
The practical attack surface is narrow:
- Pages that render these values are gated by moodle/site:config or by knowing the unguessable 32-character accesskey for progress.php.
- Most log messages are static strings or interpolate values that are themselves controlled (file/table names from a controlled archive, display_size() output, integer counters).
- The prepare_api_exception() / prepare_s3_exception() helpers already wrap remote API/S3 responses with s() before they become exception messages.
The residual risk is an admin-to-admin (or accesskey-holder) XSS path through:
- A maliciously crafted backup whose __metadata__.json / DB-dump file/table names contain HTML (these become part of 'failed to insert chunk of records into table $tablename' warnings).
- A \dml_exception or generic PHP exception whose driver-supplied message contains < characters.
Because the surrounding pages already require the highest privilege available, exploitation gives the attacker nothing they don't already have, but the inconsistency with the Moodle output convention is worth fixing.
Low risk. All affected pages are reachable only by an admin with moodle/site:config or by a holder of a 32-character random accesskey; both populations already have full control over the site, so the XSS does not lead to privilege escalation. The realistic attacker has to control a backup archive (i.e., compromise the upstream Vault account) or trigger a driver-level exception with < in its message. The reason for fixing it is hygiene and defence in depth, not an exploitable bug.
Two markup pipelines feed admin-visible pages. The first is the operation log (tool_vault_log.message) which is appended throughout backup/restore by add_to_log() and rendered by operation_model::get_logs() → format_log_line(). The second is the captured exception (tool_vault_operation.details['error'] / errorbacktrace) which is rendered by error_with_backtrace. Both end up inside Mustache placeholders that disable HTML escaping.
Hypothetical chain (not a working PoC):
- An attacker who controls the contents of an upstream backup archive arranges for a JSON dump file to be named users.<svg/onload=alert(1)>.0.json.
- During restore, dbops::insert_records() may fail and site_restore::restore_db() logs "- failed to insert chunk of records into table $tablename: …" where $tablename came from tablename($path) (files_restore.php:299).
- The admin or any holder of the operation's accesskey opens progress.php?accesskey=…; the log row is rendered through format_log_line() → html_writer::span() → {{{logs}}} and the SVG payload runs in their browser.
if ($log->loglevel === constants::LOGLEVEL_UNKNOWN) {
    $message = $log->message;
} else {
    $message =
        "[" . userdate($log->timecreated, $format, 99, false, false) . "] " .
        ($log->loglevel ? "[{$log->loglevel}] " : '') .
        ($log->pid ? "[pid {$log->pid}] " : '') .
        $log->message;
}
return $usehtml ? \html_writer::span($message, $class) : $message;
Escape the user-controllable portion before wrapping it in the span:
return $usehtml ? \html_writer::span(s($message), $class) : $message;
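The effect of escaping before markup-wrapping can be sketched in any language; here is a Python illustration with html.escape standing in for Moodle's s() (the function name and payload are illustrative, not plugin code):

```python
import html

def format_log_line(message: str, css_class: str = "vault-log") -> str:
    # Escape the attacker-influenced message first, then wrap it in trusted
    # markup -- the same ordering as the recommended s($message) fix.
    return '<span class="%s">%s</span>' % (css_class, html.escape(message))

payload = "failed to insert chunk of records into table users.<svg/onload=alert(1)>.0.json"
line = format_log_line(payload)
# The <svg> payload is neutralised to &lt;svg...&gt; while the wrapping
# span markup stays intact, so a {{{...}}} placeholder can render it safely.
```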
If format_log_line() escapes the message itself, the templates that consume the result can keep {{{logs}}} / {{{logsshort}}}, because the only raw HTML remaining is the trusted wrapping <span>. Switching the templates to double-brace {{logs}} / {{logsshort}} instead would also escape that wrapping markup, so escaping per-token inside format_log_line() while keeping the <span> is the cleaner fix.
<div class="alert alert-danger alert-block fade in ">
{{{ error }}}
{{#backtrace}}
...
{{/backtrace}}
</div>
{{#backtrace}}
<div class="backtrace" style="display: none">
<pre>{{{.}}}</pre>
</div>
{{/backtrace}}
Switch to double-brace placeholders so Mustache HTML-escapes the values:
<div class="alert alert-danger alert-block fade in ">
{{ error }}
...
</div>
{{#backtrace}}
<div class="backtrace" style="display: none">
<pre>{{ . }}</pre>
</div>
{{/backtrace}}
{{#haslogsshort}}
<div data-vault-purpose="logsshort">
<pre>
{{{logsshort}}}
</pre>
...
If format_log_line() is fixed to escape its message argument (recommended), {{{logsshort}}} / {{{logs}}} remain safe because the only raw HTML left is the wrapping <span class="…">. Otherwise, switch to {{logsshort}} / {{logs}} and emit a separate, escaped span structure from the template.
apikey_form_legacy declares returnurl as PARAM_URL (which only normalises a URL — it does not require a local one), and main::process() redirects to whatever the form returns once an API key is submitted. Combined with main_forgetapikey (which uses PARAM_LOCALURL correctly), the inconsistency means that an admin who is tricked into clicking a crafted Vault sign-in URL with an external returnurl parameter could be bounced to an attacker-controlled host after submitting their key.
Because the entire flow is inside admin_externalpage_setup() it requires moodle/site:config, so the realistic phishing value is low: a victim who can be coerced into clicking the link is already an admin and could have been redirected by other means. Still, the fix is trivial — use PARAM_LOCALURL like the sibling action.
Low risk. Open redirect with admin-only reachability is well below the threshold of an exploitable vulnerability — it cannot be chained to privilege escalation in a meaningful way and is functionally equivalent to any link in any admin email.
Two-step phishing surface: admin first lands on the legacy form (only used when \core_form\dynamic_form isn't available — i.e. older sites), enters their key, and is redirected. The dynamic-form variant (apikey_form) does not have this issue because it doesn't use returnurl.
An attacker prepares a link such as …/admin/tool/vault/index.php?addapikey=1&returnurl=https%3A%2F%2Fevil.example and convinces an admin to enter their API key on the resulting form. After the form posts, main::process() redirects the admin's browser to https://evil.example. Real exploitability is bounded by the requirement that the victim is already a site admin.
protected function definition() {
    $mform = $this->_form;
    $returnurl = $this->_customdata['returnurl'] ?? '';
    $mform->addElement("hidden", "returnurl", $returnurl);
    $mform->setType('returnurl', PARAM_URL);
    apikey_form_helper::definition($mform);
    $this->add_action_buttons();
}
$mform->setType('returnurl', PARAM_LOCALURL);
Match the validation already used by main_forgetapikey::process() (optional_param('returnurl', null, PARAM_LOCALURL)).
if ($formdata = $form->get_data()) {
    api::set_api_key($formdata->apikey);
    $returnurl = $formdata->returnurl;
    redirect($returnurl ?: self::url());
}
Validate $returnurl is local before redirecting, e.g. wrap with new \moodle_url($returnurl) after the form has been switched to PARAM_LOCALURL, or compare against the site wwwroot host before redirect.
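As a rough illustration of that host comparison, the check PARAM_LOCALURL effectively performs can be approximated as follows (Python sketch; the wwwroot value is a placeholder, and Moodle's real cleaning logic is more involved):

```python
from urllib.parse import urlparse

WWWROOT = "https://moodle.example.com"  # hypothetical site wwwroot

def is_local_returnurl(url: str) -> bool:
    # Relative paths are fine; absolute (or scheme-relative) URLs must match
    # the site's scheme and host exactly, which defeats the crafted returnurl.
    parsed = urlparse(url)
    if not parsed.scheme and not parsed.netloc:
        return True
    site = urlparse(WWWROOT)
    return (parsed.scheme.lower(), parsed.netloc.lower()) == (site.scheme, site.netloc)

ok = is_local_returnurl("/admin/tool/vault/index.php")  # True: relative, stays on-site
bad = is_local_returnurl("https://evil.example/")       # False: foreign host rejected
```

Note that scheme-relative URLs such as //evil.example/ must also be rejected, which the netloc comparison above handles.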
The plugin reaches into private/protected core internals via \ReflectionObject / \ReflectionProperty in two places:
- dbops::calculate_row_packet_sizes() calls the protected mysqli_native_moodle_database::emulate_bound_params() to estimate per-row packet size before issuing a multi-row insert.
- restoreactions\upgrade::disable_caches() and enable_caches() overwrite the private static cache_factory::$instance so a cache_factory_disabled instance is used during the simulated upgrade.
Both are deliberate workarounds for missing public APIs and both are documented inline. Neither is a security vulnerability — Moodle has no public way to do these things — but they are fragile: any core refactor that renames or removes the internals (a credible scenario across the 4.x → 5.x → 5.1+ jump) will break the plugin silently.
Recommend tracking via Moodle Tracker so a public extension point eventually replaces the reflection use, and add a defensive try { … } catch (\ReflectionException $e) { … } so a missing internal degrades to a slower-but-correct path rather than a fatal during backup/restore.
Low risk. No security implication; the concern is forward-compatibility. A future Moodle release that removes cache_factory::$instance or emulate_bound_params would silently break the corresponding code path. Adding try/catch and a fallback would make this safe.
Both call sites exist because the database packet-size and CACHE_DISABLE_ALL need to be controlled at runtime in ways the public Moodle API does not expose. The plugin already tolerates reflection failure for missing methods elsewhere (e.g. method_exists check around core\session\manager::destroy_all); the same pattern should apply here.
$reflector = new \ReflectionObject($DB);
$method = $reflector->getMethod('emulate_bound_params');
$method->setAccessible(true);
Wrap the reflection call in a try { … } catch (\ReflectionException $e) { … } and fall back to a conservative chunk size (e.g. bulkinsertsize only) when the internal method is unavailable.
$class = new \ReflectionClass(\cache_factory_disabled::class);
$constructor = $class->getConstructor();
$constructor->setAccessible(true);
$object = $class->newInstanceWithoutConstructor();
$constructor->invoke($object, 1);
$reflection = new \ReflectionProperty(\cache_factory::class, 'instance');
$reflection->setAccessible(true);
$reflection->setValue(null, $object);
Same pattern — wrap in try/catch and fall back to running the upgrade with normal caches if the private static layout changes.
after_upgrade_task::queue() writes directly to the core task_adhoc table instead of going through \core\task\manager::queue_adhoc_task(). The reason is documented in the docblock: at the moment this is called the standard task manager may not yet be available because the upgrade is still in flight.
The pattern is unavoidable given the constraint, but it does mean future schema changes to task_adhoc (e.g. additional NOT NULL columns) would silently break this path. The risk is functional rather than security.
Low risk. Bypassing the standard API is inherent to the plugin's purpose (running upgrade-time work) and the values inserted are entirely plugin-controlled. Fragility risk only.
Used by the bundled upgrade_311/401/402 paths to re-queue ad-hoc tasks that the historical core upgrade scripts depended on (e.g. mod_forum\task\refresh_forum_post_counts).
protected static function queue(string $classname, bool $checkforexisting, $customdata) {
    global $DB;
    $parts = explode('\\', $classname);
    $record = (object)[
        'classname' => $classname,
        'component' => $parts[0],
        'nextruntime' => time() - 1,
        'timecreated' => time(),
        'customdata' => $customdata !== null ? json_encode($customdata) : null,
    ];
    if ($checkforexisting) {
        $existing = $DB->get_record('task_adhoc', ['classname' => $classname, 'component' => $record->component,
            'customdata' => $record->customdata]);
        if ($existing) {
            return;
        }
    }
    $DB->insert_record('task_adhoc', $record);
}
Try \core\task\manager::queue_adhoc_task($task, $checkforexisting) first and fall back to the direct insert only if the manager call throws. That way the moment core makes the manager safe to call mid-upgrade, the plugin automatically uses the supported API:
try {
    $task = new $classname();
    if ($customdata !== null) {
        $task->set_custom_data($customdata);
    }
    \core\task\manager::queue_adhoc_task($task, $checkforexisting);
    return;
} catch (\Throwable $e) {
    // Fall through to the direct-insert workaround.
}
site_restore::restore_db() calls $DB->change_database_structure() and $DB->get_manager()->drop_table() against arbitrary core/plugin tables to reshape the local schema to match the contents of the restored backup. By Moodle's general rules, any plugin doing DDL outside db/upgrade.php would be flagged as a critical violation — this is noted explicitly so reviewers don't miss it, but this plugin is a database restore tool and the DDL is its purpose.
Mitigations the plugin already applies:
- The schema changes are only executed inside site_restore::execute(), which is reachable only via safe_start_and_execute() from the cron task or CLI.
- site_restore::schedule() and site_restore::start() both check api::are_restores_allowed() (the tool_vault/allowrestore setting, off by default).
- The site is placed in maintenance mode for the duration of the restore via tool_vault_after_config() / hook_callbacks::after_config().
- require_capability('moodle/site:config') is enforced for every UI entry point.
This finding is recorded as info so it is visible during review but does not count as a true rule violation.
Informational. The pattern is unavoidable for this plugin and is gated by capability + opt-in setting + maintenance mode. No exploit path exists below the moodle/site:config capability.
Restores rebuild the database from the backup's __structure__.xml and JSON dump, which inherently requires altering or recreating tables to match the backed-up shape (especially when restoring across major Moodle versions).
if ($altersql = $table->get_alter_sql($originaltable)) {
    try {
        $DB->change_database_structure($altersql);
        if ($originaltable) {
            $this->add_to_log('- table ' . $tablename . ' structure is modified');
        }
    } catch (\Throwable $t) {
        $this->add_to_log('- table ' . $tablename . ' structure is modified, failed to apply modifications: ' .
            $t->getMessage(), constants::LOGLEVEL_WARNING);
    }
}
foreach ($extratables as $table) {
    $DB->get_manager()->drop_table($table->get_xmldb_table());
}
log_capture::force_debug() enables $CFG->debug = E_ALL, $CFG->debugdisplay = 1, $CFG->debugdeveloper = true for the duration of every backup/restore (gated by the tool_vault/forcedebug setting which is on by default). Any captured PHP notices/warnings, plus the full stack trace stored in tool_vault_operation.details['errorbacktrace'], are visible on progress.php to anyone holding the operation's 32-character accesskey.
The access key has ~190 bits of entropy and is not exposed in any predictable way, so the realistic exposure is small, but the leaked information (file paths, library versions, occasionally row contents) is still administrative-level diagnostic data. Admins who do not want any leakage at all can set forcedebug to 0 in the plugin settings.
Informational. The accesskey gating means an attacker would need both the access key and a triggering condition that produces sensitive debug output. No realistic exploit chain.
Debug capture is intentional — the plugin's support model relies on rich logs for backup/restore failures. The settings UI exposes the toggle (tool_vault/forcedebug) and the README explains the rationale.
public static function force_debug() {
    global $CFG;
    if (!self::is_capturing() || !self::$model) {
        return;
    }
    if (!api::get_setting_checkbox('forcedebug')) {
        return;
    }
    self::$olddebug = [
        'debug' => $CFG->debug ?? null,
        'debugdisplay' => $CFG->debugdisplay ?? null,
        'debugdeveloper' => $CFG->debugdeveloper ?? false,
    ];
    if (version_compare(phpversion(), '8.4', '<')) {
        $CFG->debug = (E_ALL | E_STRICT);
    } else {
        $CFG->debug = (E_ALL);
    }
    $CFG->debugdisplay = 1;
    $CFG->debugdeveloper = true;
}
Consider documenting the trade-off in README.md so site owners with strict information-disclosure policies know to disable forcedebug on production. Alternatively, when the forced debug flag is on but progress.php is the request, redact the backtrace before rendering.
The plugin bundles copies of historical Moodle core upgrade code under classes/local/restoreactions/upgrade_311/, upgrade_401/, upgrade_402/. Each function is renamed (tool_vault_311_xmldb_*_upgrade etc.) to avoid colliding with core, and each file carries the original Moodle GPL header. This is not third-party code in the licensing sense (it is GPL-compatible Moodle code with renamed entry points), so a thirdpartylibs.xml entry is not required, but it is worth noting that ~13K lines of the plugin are vendored core logic that will need maintenance whenever core fixes a bug in those historical upgrade steps.
Backup-passphrase encryption uses a single SHA-256 round (api::prepare_encryption_key()): base64_encode(hash('sha256', $passphrase, true)). This format is dictated by AWS S3 SSE-C, so adding a KDF would break the contract with the storage backend, but admins should be aware that an attacker who later obtains an encrypted backup can brute-force weak passphrases offline. This is not a finding — just a property to document for users.
The unauthenticated progress.php page relies on a 32-character random accesskey (random_string(32), ~190 bits) stored against a unique DB index. This is sufficient entropy that brute-force is infeasible, and the access pattern (admin shares the URL with their team to monitor a long-running backup) is intentional. Audit-wise this is acceptable, but the page should be documented in the plugin README so admins understand that anyone with the URL can read the operation logs.
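The entropy figure is straightforward to verify, assuming random_string(32) draws from the usual 62-character alphanumeric pool:

```python
import math

ALPHABET_SIZE = 62   # a-z, A-Z, 0-9 -- the pool Moodle's random_string() draws from
KEY_LENGTH = 32

# Each character contributes log2(62) ~= 5.95 bits, so a 32-character key
# carries roughly 190.5 bits of entropy, matching the figure quoted above.
bits = KEY_LENGTH * math.log2(ALPHABET_SIZE)
print(f"accesskey entropy: {bits:.1f} bits")
```

At that keyspace, online brute-force against progress.php is not a realistic threat; the practical exposure is limited to someone being handed the URL.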
hook_callbacks::after_config() and the legacy tool_vault_after_config() callback both put the site into maintenance mode while a backup or restore is in progress (api::is_maintenance_mode()), which prevents anything except progress.php from running. This is correct and important — it stops users from making writes while the database is being restored — but it also means a wedged operation that fails to clear STATUS_INPROGRESS will lock the entire site until the cron task runs fail_all_stuck_operations(). The fail_if_stuck() logic and the 30-minute LOCK_TIMEOUT provide a self-recovery path.