I'm back with another helpful Python script, this time for a critical task: recovering data from deleted custom fields in Jira Cloud. Whether you're performing a compliance audit or need to restore information that was accidentally removed, this script will help you find the last known value of a custom field before it was deleted.
(Note: it also works for fields that were not deleted.)
This script leverages the Jira Cloud REST API's changelog endpoint (GET /rest/api/3/issue/{issueIdOrKey}/changelog) to retrieve the full history of changes for one or more issues. It handles pagination so that no changelog entries are missed, then parses the changelog to find the last two changes made to the specified custom field before it was deleted or changed. The retrieved data is saved into a CSV file whose columns record, for each issue, the Issue Key, the Field Identifier, and the Previous Value, Last Value, and Date of Change for both the most recent update and the update before it.
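If you want to see the shape of the API before running the full script, here is a minimal sketch of paging through the changelog endpoint with startAt/maxResults. The domain, email, API token, and issue key below are placeholders you would replace with your own:

import requests
from requests.auth import HTTPBasicAuth

# Placeholder site, credentials, and issue key -- replace with your own
auth = HTTPBasicAuth("you@example.com", "<your-api-token>")
url = "https://your-domain.atlassian.net/rest/api/3/issue/PRO-212/changelog"

entries = []
start_at, max_results = 0, 100
while True:
    resp = requests.get(url, params={"startAt": start_at, "maxResults": max_results}, auth=auth)
    resp.raise_for_status()
    page = resp.json()  # contains startAt, maxResults, total, and a 'values' list of changelog entries
    entries.extend(page.get("values", []))
    if start_at + max_results >= page.get("total", 0):
        break  # last page reached
    start_at += max_results

print(f"Retrieved {len(entries)} changelog entries")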
The script will prompt you to enter your Jira Cloud URL, email, API token, the list of issue keys, and the custom field's name or ID you're interested in.
1) Make sure that the user running this script has permission to browse the issues in the relevant projects (the "Browse Projects" project permission)
2) Install the necessary Python libraries by running:
pip install requests
3) Prepare your Jira Cloud site URL (your-domain.atlassian.net), email address, and API token
4) Prepare your issue keys in the comma-separated format the script expects, for example:
PRO-212, PRO-45, DEV-101, DEV-202, MKT-303, MKT-404, HR-505, HR-606, FIN-707, FIN-808, IT-909, IT-111
import requests
from requests.auth import HTTPBasicAuth
from getpass import getpass
import logging
import os
import csv

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')


def get_jira_auth():
    """Prompt for the Jira Cloud domain and credentials and build a basic-auth object."""
    jira_domain = input("Enter your Jira Cloud domain (e.g., your-domain.atlassian.net): ")
    email = input("Enter your Jira email: ")
    api_token = getpass("Enter your Jira API token: ")
    return jira_domain, HTTPBasicAuth(email, api_token)


def get_issue_changelog(jira_domain, auth, issue_key):
    """Retrieve the full changelog for an issue, following pagination until all entries are fetched."""
    base_url = f"https://{jira_domain}/rest/api/3/issue/{issue_key}/changelog"
    all_changelog_entries = []
    start_at = 0
    max_results = 100
    while True:
        url = f"{base_url}?startAt={start_at}&maxResults={max_results}"
        response = None
        try:
            response = requests.get(url, auth=auth)
            response.raise_for_status()
            changelog_data = response.json()
            all_changelog_entries.extend(changelog_data.get('values', []))
            # Stop when the last page has been reached
            if len(changelog_data.get('values', [])) < max_results or start_at + max_results >= changelog_data.get('total', 0):
                break
            start_at += max_results
        except requests.exceptions.RequestException as e:
            logging.error(f"Error retrieving changelog for issue {issue_key}: {e}")
            if response is not None and response.status_code == 401:
                logging.error("Authentication failed. Please verify your email and API token.")
            elif response is not None and response.status_code == 404:
                logging.error(f"Issue with key '{issue_key}' not found.")
            return None
    return all_changelog_entries


def find_last_two_changes(changelog_entries, field_identifier):
    """Walk the changelog from newest to oldest and collect the last two changes of the target field."""
    found_changes = []
    for entry in reversed(changelog_entries):
        entry_date = entry.get('created')
        for item in entry.get('items', []):
            field_name = item.get('field')
            field_id = item.get('fieldId')
            # Match by custom field name or ID
            if field_name == field_identifier or field_id == field_identifier:
                # Set value to None if it's null or an empty string
                from_string = item.get('fromString')
                if from_string is None or from_string == "":
                    from_string = None
                to_string = item.get('toString')
                if to_string is None or to_string == "":
                    to_string = None
                found_changes.append({
                    "from_value": from_string,
                    "to_value": to_string,
                    "date": entry_date
                })
                break
        if len(found_changes) >= 2:
            break
    # Return in chronological order (older change first, most recent change last)
    return found_changes[::-1]


def save_to_csv(data, filename="jira_deleted_field_data.csv"):
    """Write the recovered field values to a CSV file."""
    if not data:
        logging.warning("No data to save to CSV.")
        return
    fieldnames = [
        'Issue Key',
        'Field Identifier',
        'Previous Value (most recent update)',
        'Last Value (most recent update)',
        'Date of Change (most recent update)',
        'Previous Value (before the most recent update)',
        'Last Value (before the most recent update)',
        'Date of Change (before the most recent update)'
    ]
    try:
        with open(filename, 'w', newline='', encoding='utf-8') as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(data)
        logging.info("Crafting CSV file with the retrieved data")
        logging.info(f"Operation completed, CSV file saved at: {os.path.abspath(filename)}")
    except Exception as e:
        logging.error(f"Error saving data to CSV file: {e}")


def main():
    jira_domain, auth = get_jira_auth()
    issue_keys_input = input("Enter the Jira issue key(s) separated by comma (e.g., PROJ-123,PROJ-456): ").strip()
    issue_keys = [key.strip() for key in issue_keys_input.split(',')]
    field_identifier = input("Enter the custom field name or ID (e.g., 'customfield_10001' or 'My Custom Field'): ").strip()
    recovered_data = []
    for issue_key in issue_keys:
        logging.info(f"Retrieving changelog for issue {issue_key}")
        changelog = get_issue_changelog(jira_domain, auth, issue_key)
        if changelog:
            changes = find_last_two_changes(changelog, field_identifier)
            if changes:
                issue_data = {
                    'Issue Key': issue_key,
                    'Field Identifier': field_identifier,
                    'Previous Value (most recent update)': 'N/A',
                    'Last Value (most recent update)': 'N/A',
                    'Date of Change (most recent update)': 'N/A',
                    'Previous Value (before the most recent update)': 'N/A',
                    'Last Value (before the most recent update)': 'N/A',
                    'Date of Change (before the most recent update)': 'N/A'
                }
                # Handle the most recent change
                most_recent_change = changes[-1]
                issue_data['Previous Value (most recent update)'] = most_recent_change['from_value']
                issue_data['Last Value (most recent update)'] = most_recent_change['to_value']
                issue_data['Date of Change (most recent update)'] = most_recent_change['date']
                # Handle the second to last change if it exists
                if len(changes) > 1:
                    second_last_change = changes[-2]
                    issue_data['Previous Value (before the most recent update)'] = second_last_change['from_value']
                    issue_data['Last Value (before the most recent update)'] = second_last_change['to_value']
                    issue_data['Date of Change (before the most recent update)'] = second_last_change['date']
                recovered_data.append(issue_data)
                logging.info(f"Successfully found data for issue {issue_key}. {len(changes)} change(s) were recorded.")
            else:
                logging.warning(f"Could not find any previous value for field '{field_identifier}' on issue {issue_key}")
    total_recovered = len(recovered_data)
    logging.info(f"Operation completed. A total of {total_recovered} entries were found for the specified field.")
    if recovered_data:
        save_to_csv(recovered_data)
    else:
        logging.info("No data was recovered. CSV file will not be created.")


if __name__ == "__main__":
    main()
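To try it out, save the script to a local file and run it with Python 3; the file name below is just an example, and the script will then walk you through the prompts described above:

python3 recover_custom_field_values.py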
Output example:
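The exact values depend on your site and the field you query, but the console output looks roughly like the following (the issue key, counts, timestamps, and file path are illustrative placeholders):

2025-01-15 10:30:00,120 - INFO - Retrieving changelog for issue PRO-212
2025-01-15 10:30:01,450 - INFO - Successfully found data for issue PRO-212. 2 change(s) were recorded.
2025-01-15 10:30:01,451 - INFO - Operation completed. A total of 1 entries were found for the specified field.
2025-01-15 10:30:01,452 - INFO - Crafting CSV file with the retrieved data
2025-01-15 10:30:01,455 - INFO - Operation completed, CSV file saved at: /path/to/jira_deleted_field_data.csv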
Disclaimer:
While this script is designed to facilitate certain interactions with Jira Software Cloud as a convenience, it is essential to understand that its functionality is subject to change due to updates to Jira Software Cloud's API or other conditions that could affect its operation.
Please note that this script is provided on an "as is" and "as available" basis without any warranties of any kind. This script is not officially supported or endorsed by Atlassian, and its use is at your own discretion and risk.
Cheers!
Delfino Rosales
Senior Cloud Support Engineer
Amsterdam, NL