Merge branch 'dev' into bugfix/api-perms-request

Zedifus 2024-04-07 00:06:15 +01:00
commit 20a8366410
28 changed files with 648 additions and 385 deletions

View File

@@ -3,16 +3,16 @@
- **Install Type:** Git Cloned(Manual) / Installer / WinPackage / Docker
## What Happened?
*A brief description of what happened when you tried to perform an action*
<!-- A brief description of what happened when you tried to perform an action -->
## Expected result
*What should have happened when you performed the actions*
<!-- What should have happened when you performed the actions -->
## Steps to reproduce
*List the steps required to produce the error. These should be as few as possible*
<!-- List the steps required to produce the error. These should be as few as possible -->
## Screenshots
*Any relevant screenshots which show the issue*
<!-- Any relevant screenshots which show the issue -->
## Priority/Severity
- [ ] High (anything that impacts the normal user flow or blocks app usage)

View File

@@ -1,13 +1,14 @@
## Summary
*Outline the issue being faced, and why this needs to change*
<!-- Outline the issue being faced, and why this needs to change -->
## Area of the system
*This might only be one part, but may involve multiple sections, Login/Dashboard/Terminal/Config*
<!-- This might only be one part, but may involve multiple sections, Login/Dashboard/Terminal/Config -->
## How does this currently work?
<!-- A brief description of how the functionality currently operates -->
## What is the desired way of working?
*After the change, what should the process/operation be?*
<!-- After the change, what should the process/operation be? -->
## Priority/Severity
- [ ] High (This will bring a huge increase in performance/productivity/usability)

View File

@@ -1,8 +1,8 @@
## Problem Statement
*What is the issue being faced and needs addressing?*
<!-- What is the issue being faced and needs addressing? -->
## Who will benefit?
*Will this fix a problem that only one user has, or will it benefit a lot of people?*
<!-- Will this fix a problem that only one user has, or will it benefit a lot of people? -->
## Benefits and risks
What benefits does this bring?
@@ -16,10 +16,10 @@
## Proposed solution
*How would you like to see this issue resolved?*
<!-- How would you like to see this issue resolved? -->
## Examples
*Are there any examples of this which exist in other software?*
<!-- Are there any examples of this which exist in other software? -->
## Priority/Severity
- [ ] High (This will bring a huge increase in performance/productivity/usability)

View File

@@ -1,22 +1,22 @@
## What does this MR do and why?
___Describe in detail what your merge request does and why.___<br>
> *Please keep this description updated with any discussion that takes place so*<br>
*that reviewers can understand your intent. Keeping the description updated is*<br>
*especially important if they didn't participate in the discussion.*<br>
<!-- Describe in detail what your merge request does and why. -->
<!-- Please keep this description updated with any discussion that takes place so -->
<!-- that reviewers can understand your intent. Keeping the description updated is -->
<!-- especially important if they didn't participate in the discussion. -->
## Screenshots or screen recordings
___These are strongly recommended to assist reviewers and reduce the time to merge your change.___<br>
> *Please include any relevant screenshots or screen recordings that will assist*<br>
*reviewers and future readers. If you need help visually verifying the change,*<br>
*please leave a comment and ping a GitLab reviewer, maintainer, or MR coach.*<br>
<!-- These are strongly recommended to assist reviewers and reduce the time to merge your change. -->
<!-- Please include any relevant screenshots or screen recordings that will assist -->
<!-- reviewers and future readers. If you need help visually verifying the change, -->
<!-- please leave a comment and ping a GitLab reviewer, maintainer, or MR coach. -->
## How to set up and validate locally
___Numbered steps to set up and validate the change are strongly suggested.___
<!-- Numbered steps to set up and validate the change are strongly suggested. -->
## MR acceptance checklist

View File

@@ -3,13 +3,46 @@
# Prompt the user for the directory path
read -p "Enter the directory path to set permissions (/var/opt/minecraft/crafty): " directory_path
# Count the total number of directories
total_dirs=$(find "$directory_path" -type d 2>/dev/null | wc -l)
# Count the total number of files
total_files=$(find "$directory_path" -type f 2>/dev/null | wc -l)
# Initialize a counter for directories and files
dir_count=0
file_count=0
# Function to print progress
print_progress() {
echo -ne "\rDirectories: $dir_count/$total_dirs Files: $file_count/$total_files"
}
# Check if the script is running within a Docker container
if [ -f "/.dockerenv" ]; then
echo "Script is running within a Docker container. Exiting with error."
exit 1 # Exit with an error code if running in Docker
else
echo "Script is not running within a Docker container. Executing permissions changes..."
# Run the commands to set permissions
sudo chmod 700 $(find "$directory_path" -type d)
sudo chmod 644 $(find "$directory_path" -type f)
# Run the commands to set permissions for directories
echo "Changing permissions for directories:"
for dir in $(find "$directory_path" -type d 2>/dev/null); do
if [ -e "$dir" ]; then
sudo chmod 700 "$dir" && ((dir_count++))
fi
print_progress
done
# Run the commands to set permissions for files
echo -e "\nChanging permissions for files:"
for file in $(find "$directory_path" -type f 2>/dev/null); do
if [ -e "$file" ]; then
sudo chmod 644 "$file" && ((file_count++))
fi
print_progress
done
echo "You will now need to execute a chmod +x on all bedrock executables"
fi
echo "" # Adding a new line after the loop for better readability

View File

@@ -2,8 +2,14 @@
## --- [4.3.2] - 2024/TBD
### New features
TBD
### Refactor
- Refactor ServerJars caching and move to api.serverjars.com ([Merge Request](https://gitlab.com/crafty-controller/crafty-4/-/merge_requests/744) | [Merge Request](https://gitlab.com/crafty-controller/crafty-4/-/merge_requests/746))
### Bug fixes
TBD
- Fix migrator issue when jumping versions ([Merge Request](https://gitlab.com/crafty-controller/crafty-4/-/merge_requests/734))
- Fix backend issue causing error when restoring backups in 4.3.x ([Merge Request](https://gitlab.com/crafty-controller/crafty-4/-/merge_requests/736))
- Fix backend issue causing error when cloning servers in 4.3.x ([Merge Request](https://gitlab.com/crafty-controller/crafty-4/-/merge_requests/741))
- Bump orjson for CVE-2024-27454 ([Merge Request](https://gitlab.com/crafty-controller/crafty-4/-/merge_requests/747))
- Fix calling of orjson JSONDecodeError class ([Merge Request](https://gitlab.com/crafty-controller/crafty-4/-/merge_requests/747))
### Tweaks
- Clean up remaining http handler references ([Merge Request](https://gitlab.com/crafty-controller/crafty-4/-/merge_requests/733))
- Remove version disclosure on login page ([Merge Request](https://gitlab.com/crafty-controller/crafty-4/-/merge_requests/737))

View File

@@ -47,7 +47,7 @@ class ServerPermsController:
new_server_id,
role.role_id,
PermissionsServers.get_permissions_mask(
int(role.role_id), int(old_server_id)
int(role.role_id), old_server_id
),
)
# Permissions_Servers.add_role_server(

View File

@@ -12,13 +12,15 @@ from app.classes.shared.file_helpers import FileHelpers
from app.classes.shared.websocket_manager import WebSocketManager
logger = logging.getLogger(__name__)
# Temp type var until sjars restores generic fetchTypes
SERVERJARS_TYPES = ["modded", "proxies", "servers", "vanilla"]
PAPERJARS = ["paper", "folia"]
class ServerJars:
def __init__(self, helper):
self.helper = helper
self.base_url = "https://serverjars.com"
self.base_url = "https://api.serverjars.com"
self.paper_base = "https://api.papermc.io"
@staticmethod
@@ -82,6 +84,183 @@ class ServerJars:
builds = api_data.get("builds", [])
return builds[-1] if builds else None
def _read_cache(self):
cache_file = self.helper.serverjar_cache
cache = {}
try:
with open(cache_file, "r", encoding="utf-8") as f:
cache = json.load(f)
except Exception as e:
logger.error(f"Unable to read serverjars.com cache file: {e}")
return cache
def get_serverjar_data(self):
data = self._read_cache()
return data.get("types")
def _check_sjars_api_alive(self):
logger.info("Checking serverjars.com API status")
check_url = f"{self.base_url}"
try:
response = requests.get(check_url, timeout=2)
response_json = response.json()
if (
response.status_code in [200, 201]
and response_json.get("status") == "success"
and response_json.get("response", {}).get("status") == "ok"
):
logger.info("Serverjars.com API is alive and responding as expected")
return True
except Exception as e:
logger.error(f"Unable to connect to serverjar.com API due to error: {e}")
return False
logger.error(
"Serverjars.com API is not responding as expected or unable to contact"
)
return False
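# For reference, a sketch of the health-check payload the checks above
# imply api.serverjars.com returns (an assumption inferred from the code,
# not a documented contract):
#   {"status": "success", "response": {"status": "ok"}}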
def _fetch_projects_for_type(self, server_type):
"""
Fetches projects for a given server type from the ServerJars API.
"""
try:
response = requests.get(
f"{self.base_url}/api/fetchTypes/{server_type}", timeout=5
)
response.raise_for_status() # Ensure HTTP errors are caught
data = response.json()
if data.get("status") == "success":
return data["response"].get("servers", [])
except requests.RequestException as e:
print(f"Error fetching projects for type {server_type}: {e}")
return []
def _get_server_type_list(self):
"""
Builds the type structure with projects fetched for each type.
"""
type_structure = {}
for server_type in SERVERJARS_TYPES:
projects = self._fetch_projects_for_type(server_type)
type_structure[server_type] = {project: [] for project in projects}
return type_structure
def _get_jar_versions(self, server_type, project_name, max_ver=50):
"""
Grabs available versions for specified project
Args:
server_type (str): Server Type Category (modded, servers, etc)
project_name (str): Target project (paper, forge, magma, etc)
max_ver (int, optional): Max versions returned. Defaults to 50.
Returns:
list: An array of versions
"""
url = f"{self.base_url}/api/fetchAll/{server_type}/{project_name}?max={max_ver}"
try:
response = requests.get(url, timeout=5)
response.raise_for_status() # Ensure HTTP errors are caught
data = response.json()
logger.debug(f"Received data for {server_type}/{project_name}: {data}")
if data.get("status") == "success":
versions = [
item.get("version")
for item in data.get("response", [])
if "version" in item
]
versions.reverse() # Reverse so versions are newest -> oldest
logger.debug(f"Versions extracted: {versions}")
return versions
except requests.RequestException as e:
logger.error(
f"Error fetching jar versions for {server_type}/{project_name}: {e}"
)
return []
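# Usage sketch (hypothetical values; real project names and version lists
# depend on the live API response):
#   sjars = ServerJars(helper)
#   sjars._get_jar_versions("servers", "paper", max_ver=3)
#   # -> e.g. ["1.20.4", "1.20.2", "1.20.1"] (newest first)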
def _refresh_cache(self):
"""
Contains the shared logic for refreshing the cache.
This method is called by both manual_refresh_cache and refresh_cache methods.
"""
now = datetime.now()
cache_data = {
"last_refreshed": now.strftime("%m/%d/%Y, %H:%M:%S"),
"types": self._get_server_type_list(),
}
for server_type, projects in cache_data["types"].items():
for project_name in projects:
versions = self._get_jar_versions(server_type, project_name)
cache_data["types"][server_type][project_name] = versions
for paper_project in PAPERJARS:
cache_data["types"]["servers"][paper_project] = self.get_paper_versions(
paper_project
)
return cache_data
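# The cache_data built above serializes to JSON shaped roughly like this
# (a sketch; project and version entries are illustrative, not fixed):
# {
#     "last_refreshed": "04/07/2024, 00:06:15",
#     "types": {
#         "modded": {"forge": ["..."], "fabric": ["..."]},
#         "proxies": {"bungeecord": ["..."]},
#         "servers": {"paper": ["..."], "folia": ["..."]},
#         "vanilla": {"vanilla": ["..."]}
#     }
# }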
def manual_refresh_cache(self):
"""
Manually triggers the cache refresh process.
"""
if not self._check_sjars_api_alive():
logger.error("ServerJars API is not available.")
return False
logger.info("Manual cache refresh requested.")
cache_data = self._refresh_cache()
# Save the updated cache data
try:
with open(self.helper.serverjar_cache, "w", encoding="utf-8") as cache_file:
json.dump(cache_data, cache_file, indent=4)
logger.info("Cache file successfully refreshed manually.")
except Exception as e:
logger.error(f"Failed to update cache file manually: {e}")
def refresh_cache(self):
"""
Automatically triggers the cache refresh process based on age.
This method checks if the cache file is older than a specified number of days
before deciding to refresh.
"""
cache_file_path = self.helper.serverjar_cache
# Determine if the cache is old and needs refreshing
cache_old = self.helper.is_file_older_than_x_days(cache_file_path)
# debug override
# cache_old = True
if not self._check_sjars_api_alive():
logger.error("ServerJars API is not available.")
return False
if not cache_old:
logger.info("Cache file is not old enough to require automatic refresh.")
return False
logger.info("Automatic cache refresh initiated due to old cache.")
cache_data = self._refresh_cache()
# Save the updated cache data
try:
with open(cache_file_path, "w", encoding="utf-8") as cache_file:
json.dump(cache_data, cache_file, indent=4)
logger.info("Cache file successfully refreshed automatically.")
except Exception as e:
logger.error(f"Failed to update cache file automatically: {e}")
def get_fetch_url(self, jar, server, version):
"""
Constructs the URL for downloading a server JAR file based on the server type.
@@ -132,151 +311,6 @@ class ServerJars:
logger.error(f"An error occurred while constructing fetch URL: {e}")
return None
def _get_api_result(self, call_url: str):
full_url = f"{self.base_url}{call_url}"
try:
response = requests.get(full_url, timeout=2)
response.raise_for_status()
api_data = json.loads(response.content)
except Exception as e:
logger.error(f"Unable to load {full_url} api due to error: {e}")
return {}
api_result = api_data.get("status")
api_response = api_data.get("response", {})
if api_result != "success":
logger.error(f"Api returned a failed status: {api_result}")
return {}
return api_response
def _read_cache(self):
cache_file = self.helper.serverjar_cache
cache = {}
try:
with open(cache_file, "r", encoding="utf-8") as f:
cache = json.load(f)
except Exception as e:
logger.error(f"Unable to read serverjars.com cache file: {e}")
return cache
def get_serverjar_data(self):
data = self._read_cache()
return data.get("types")
def _check_api_alive(self):
logger.info("Checking serverjars.com API status")
check_url = f"{self.base_url}/api/fetchTypes"
try:
response = requests.get(check_url, timeout=2)
if response.status_code in [200, 201]:
logger.info("Serverjars.com API is alive")
return True
except Exception as e:
logger.error(f"Unable to connect to serverjar.com api due to error: {e}")
return {}
logger.error("unable to contact serverjars.com api")
return False
def manual_refresh_cache(self):
cache_file = self.helper.serverjar_cache
# debug override
# cache_old = True
# if the API is down... we bomb out
if not self._check_api_alive():
return False
logger.info("Manual Refresh requested.")
now = datetime.now()
data = {
"last_refreshed": now.strftime("%m/%d/%Y, %H:%M:%S"),
"types": {},
}
jar_types = self._get_server_type_list()
data["types"].update(jar_types)
for s in data["types"]:
data["types"].update({s: dict.fromkeys(data["types"].get(s), {})})
for j in data["types"].get(s):
versions = self._get_jar_details(j, s)
data["types"][s].update({j: versions})
for item in PAPERJARS:
data["types"]["servers"][item] = self.get_paper_versions(item)
# save our cache
try:
with open(cache_file, "w", encoding="utf-8") as f:
f.write(json.dumps(data, indent=4))
logger.info("Cache file refreshed")
except Exception as e:
logger.error(f"Unable to update serverjars.com cache file: {e}")
def refresh_cache(self):
cache_file = self.helper.serverjar_cache
cache_old = self.helper.is_file_older_than_x_days(cache_file)
# debug override
# cache_old = True
# if the API is down... we bomb out
if not self._check_api_alive():
return False
logger.info("Checking Cache file age")
# if file is older than 1 day
if cache_old:
logger.info("Cache file is over 1 day old, refreshing")
now = datetime.now()
data = {
"last_refreshed": now.strftime("%m/%d/%Y, %H:%M:%S"),
"types": {},
}
jar_types = self._get_server_type_list()
data["types"].update(jar_types)
for s in data["types"]:
data["types"].update({s: dict.fromkeys(data["types"].get(s), {})})
for j in data["types"].get(s):
versions = self._get_jar_details(j, s)
data["types"][s].update({j: versions})
for item in PAPERJARS:
data["types"]["servers"][item] = self.get_paper_versions(item)
# save our cache
try:
with open(cache_file, "w", encoding="utf-8") as f:
f.write(json.dumps(data, indent=4))
logger.info("Cache file refreshed")
except Exception as e:
logger.error(f"Unable to update serverjars.com cache file: {e}")
def _get_jar_details(self, server_type, jar_type="servers"):
url = f"/api/fetchAll/{jar_type}/{server_type}"
response = self._get_api_result(url)
temp = []
for v in response:
temp.append(v.get("version"))
time.sleep(0.5)
return temp
def _get_server_type_list(self):
url = "/api/fetchTypes/"
response = self._get_api_result(url)
if "bedrock" in response.keys():
# remove pocketmine from options
del response["bedrock"]
return response
def download_jar(self, jar, server, version, path, server_id):
update_thread = threading.Thread(
name=f"server_download-{server_id}-{server}-{version}",

View File

@@ -575,7 +575,7 @@ class Controller:
):
server_obj = self.servers.get_server_obj(new_server_id)
url = (
"https://serverjars.com/api/fetchJar/"
"https://api.serverjars.com/api/fetchJar/"
f"{create_data['category']}"
f"/{create_data['type']}/{create_data['version']}"
)
@@ -1131,7 +1131,7 @@ class Controller:
server_obj.path = new_local_server_path
failed = False
for s in self.servers.failed_servers:
if int(s["server_id"]) == int(server.get("server_id")):
if s["server_id"] == server.get("server_id"):
failed = True
if not failed:
self.servers.update_server(server_obj)

View File

@@ -372,11 +372,11 @@ class MigrationManager(object):
Create migrator
"""
migrator = Migrator(self.database)
# Removing the up_one to prevent running all
# migrations each time we got a new one.
# It's handled by migration.up() function.
# for name in self.done:
# self.up_one(name, migrator, True)
# Running fake migrations to retrieve the schemas of
# the previously created tables into the table_dict element.
# It's useful for running the new migrations.
for name in self.done:
self.up_one(name, migrator, True)
return migrator
def compile(self, name, migrate="", rollback=""):

View File

@@ -1403,7 +1403,7 @@ class PanelHandler(BaseHandler):
self.controller.management.add_to_audit_log(
exec_user["user_id"],
f"Removed user {target_user['username']} (UID:{user_id})",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)
self.redirect("/panel/panel_config")

View File

@@ -228,7 +228,7 @@ class PublicHandler(BaseHandler):
)
# log this login
self.controller.management.add_to_audit_log(
user_data.user_id, "Logged in", 0, self.get_remote_ip()
user_data.user_id, "Logged in", None, self.get_remote_ip()
)
return self.finish_json(
@@ -254,7 +254,7 @@ class PublicHandler(BaseHandler):
)
# log this failed login attempt
self.controller.management.add_to_audit_log(
user_data.user_id, "Tried to log in", 0, self.get_remote_ip()
user_data.user_id, "Tried to log in", None, self.get_remote_ip()
)
return self.finish_json(
403,

View File

@@ -101,7 +101,7 @@ class ApiAuthLoginHandler(BaseApiHandler):
# log this login
self.controller.management.add_to_audit_log(
user_data.user_id, "logged in via the API", 0, self.get_remote_ip()
user_data.user_id, "logged in via the API", None, self.get_remote_ip()
)
self.finish_json(
@@ -119,7 +119,7 @@ class ApiAuthLoginHandler(BaseApiHandler):
else:
# log this failed login attempt
self.controller.management.add_to_audit_log(
user_data.user_id, "Tried to log in", 0, self.get_remote_ip()
user_data.user_id, "Tried to log in", None, self.get_remote_ip()
)
self.finish_json(
401,

View File

@@ -106,7 +106,7 @@ class ApiCraftyConfigIndexHandler(BaseApiHandler):
try:
data = orjson.loads(self.request.body)
except orjson.decoder.JSONDecodeError as e:
except orjson.JSONDecodeError as e:
return self.finish_json(
400, {"status": "error", "error": "INVALID_JSON", "error_data": str(e)}
)
@@ -128,7 +128,7 @@ class ApiCraftyConfigIndexHandler(BaseApiHandler):
self.controller.management.add_to_audit_log(
user["user_id"],
"edited config.json",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)
@@ -187,7 +187,7 @@ class ApiCraftyCustomizeIndexHandler(BaseApiHandler):
try:
data = orjson.loads(self.request.body)
except orjson.decoder.JSONDecodeError as e:
except orjson.JSONDecodeError as e:
return self.finish_json(
400, {"status": "error", "error": "INVALID_JSON", "error_data": str(e)}
)
@@ -225,7 +225,7 @@ class ApiCraftyCustomizeIndexHandler(BaseApiHandler):
self.controller.management.add_to_audit_log(
user["user_id"],
f"customized login photo: {data['photo']}/{data['opacity']}",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)
self.controller.management.set_login_opacity(int(data["opacity"]))

View File

@@ -68,7 +68,7 @@ class ApiCraftyConfigServerDirHandler(BaseApiHandler):
try:
data = orjson.loads(self.request.body)
except orjson.decoder.JSONDecodeError as e:
except orjson.JSONDecodeError as e:
return self.finish_json(
400, {"status": "error", "error": "INVALID_JSON", "error_data": str(e)}
)
@@ -109,7 +109,7 @@ class ApiCraftyConfigServerDirHandler(BaseApiHandler):
self.controller.management.add_to_audit_log(
auth_data[4]["user_id"],
f"updated master servers dir to {new_dir}/servers",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)

View File

@@ -161,7 +161,7 @@ class ApiRolesIndexHandler(BaseApiHandler):
self.controller.management.add_to_audit_log(
user["user_id"],
f"created role {role_name} (RID:{role_id})",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)

View File

@@ -112,7 +112,7 @@ class ApiRolesRoleIndexHandler(BaseApiHandler):
self.controller.management.add_to_audit_log(
user["user_id"],
f"deleted role with ID {role_id}",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)
@@ -133,7 +133,7 @@ class ApiRolesRoleIndexHandler(BaseApiHandler):
try:
data = orjson.loads(self.request.body)
except orjson.decoder.JSONDecodeError as e:
except orjson.JSONDecodeError as e:
return self.finish_json(
400, {"status": "error", "error": "INVALID_JSON", "error_data": str(e)}
)
@@ -172,7 +172,7 @@ class ApiRolesRoleIndexHandler(BaseApiHandler):
self.controller.management.add_to_audit_log(
user["user_id"],
f"modified role with ID {role_id}",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)

View File

@@ -33,6 +33,17 @@ class ApiServersServerActionHandler(BaseApiHandler):
self.controller.crafty_perms.can_create_server(auth_data[4]["user_id"])
or auth_data[4]["superuser"]
):
srv_object = self.controller.servers.get_server_instance_by_id(
server_id
)
if srv_object.check_running():
return self.finish_json(
409,
{
"status": "error",
"error": "Server Running!",
},
)
self._clone_server(server_id, auth_data[4]["user_id"])
return self.finish_json(200, {"status": "ok"})
return self.finish_json(
@@ -67,20 +78,29 @@ class ApiServersServerActionHandler(BaseApiHandler):
name_counter += 1
new_server_name = server_data.get("server_name") + f" (Copy {name_counter})"
new_server_id = self.controller.servers.create_server(
new_server_name,
None,
"",
None,
server_data.get("executable"),
None,
server_data.get("stop_command"),
server_data.get("type"),
user_id,
server_data.get("server_port"),
new_server_id = self.helper.create_uuid()
new_server_path = os.path.join(self.helper.servers_dir, new_server_id)
new_backup_path = os.path.join(self.helper.backup_path, new_server_id)
new_server_command = str(server_data.get("execution_command")).replace(
server_id, new_server_id
)
new_server_log_path = server_data.get("log_path").replace(
server_id, new_server_id
)
new_server_path = os.path.join(self.helper.servers_dir, new_server_id)
self.controller.register_server(
new_server_name,
new_server_id,
new_server_path,
new_backup_path,
new_server_command,
server_data.get("executable"),
new_server_log_path,
server_data.get("stop_command"),
server_data.get("server_port"),
user_id,
server_data.get("type"),
)
self.controller.management.add_to_audit_log(
user_id,
@@ -92,18 +112,6 @@ class ApiServersServerActionHandler(BaseApiHandler):
# copy the old server
FileHelpers.copy_dir(server_data.get("path"), new_server_path)
# TODO get old server DB data to individual variables
new_server_command = str(server_data.get("execution_command"))
new_server_log_file = str(
self.helper.get_os_understandable_path(server_data.get("log_path"))
)
server: Servers = self.controller.servers.get_server_obj(new_server_id)
server.path = new_server_path
server.log_path = new_server_log_file
server.execution_command = new_server_command
self.controller.servers.update_server(server)
for role in self.controller.server_perms.get_server_roles(server_id):
mask = self.controller.server_perms.get_permissions_mask(
role.role_id, server_id

View File

@@ -203,7 +203,7 @@ class ApiServersServerBackupsBackupIndexHandler(BaseApiHandler):
except JobLookupError as e:
logger.info("No active tasks found for server: {e}")
self.controller.remove_server(server_id, True)
except Exception as e:
except (FileNotFoundError, NotADirectoryError) as e:
return self.finish_json(
400, {"status": "error", "error": f"NO BACKUP FOUND {e}"}
)

View File

@@ -177,7 +177,7 @@ class ApiUsersIndexHandler(BaseApiHandler):
self.controller.management.add_to_audit_log(
user["user_id"],
f"added user {username} (UID:{user_id}) with roles {roles}",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)

View File

@@ -43,7 +43,7 @@ class ApiUsersUserKeyHandler(BaseApiHandler):
auth_data[4]["user_id"],
f"Generated a new API token for the key {key.name} "
f"from user with UID: {key.user_id}",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)
data_key = self.controller.authentication.generate(
@@ -173,7 +173,7 @@ class ApiUsersUserKeyHandler(BaseApiHandler):
f"Added API key {data['name']} with crafty permissions "
f"{data['crafty_permissions_mask']}"
f" and {data['server_permissions_mask']} for user with UID: {user_id}",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)
self.finish_json(200, {"status": "ok", "data": {"id": key_id}})
@@ -233,7 +233,7 @@ class ApiUsersUserKeyHandler(BaseApiHandler):
auth_data[4]["user_id"],
f"Removed API key {target_key} "
f"(ID: {key_id}) from user {auth_data[4]['user_id']}",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)

View File

@@ -94,7 +94,7 @@ class ApiUsersUserIndexHandler(BaseApiHandler):
self.controller.management.add_to_audit_log(
user["user_id"],
f"deleted the user {user_id}",
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)
@@ -283,7 +283,7 @@ class ApiUsersUserIndexHandler(BaseApiHandler):
f"edited user {user_obj.username} (UID: {user_id})"
f"with roles {user_obj.roles}"
),
server_id=0,
server_id=None,
source_ip=self.get_remote_ip(),
)

View File

@@ -147,7 +147,7 @@ class ServerHandler(BaseHandler):
page_data["server_api"] = False
if page_data["online"]:
page_data["server_api"] = self.helper.check_address_status(
"https://serverjars.com/api/fetchTypes"
"https://api.serverjars.com"
)
page_data["server_types"] = self.controller.server_jars.get_serverjar_data()
page_data["js_server_types"] = json.dumps(

View File

@@ -55,7 +55,7 @@ class WebSocketHandler(tornado.websocket.WebSocketHandler):
self.controller.management.add_to_audit_log_raw(
"unknown",
0,
0,
None,
"Someone tried to connect via WebSocket without proper authentication",
self.get_remote_ip(),
)
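Across these handler diffs the pattern is identical: audit entries that are not tied to a specific server now pass server_id=None instead of 0, because server IDs become UUID strings in the migrations below. A sketch of the call shape (argument roles inferred from the keyword usage in the diffs above):

# self.controller.management.add_to_audit_log(
#     user_id,               # acting user (or "unknown" for raw entries)
#     "Logged in",           # human-readable action text
#     None,                  # server_id: None when no server is involved
#     self.get_remote_ip(),  # source IP for the audit trail
# )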

View File

@@ -54,9 +54,6 @@ def migrate(migrator: Migrator, database, **kwargs):
database = db
try:
logger.info("Migrating Data from Int to UUID (Type Change)")
Console.info("Migrating Data from Int to UUID (Type Change)")
# Changes on Server Table
migrator.alter_column_type(
Servers,
@@ -87,11 +84,6 @@
),
)
migrator.run()
logger.info("Migrating Data from Int to UUID (Type Change) : SUCCESS")
Console.info("Migrating Data from Int to UUID (Type Change) : SUCCESS")
except Exception as ex:
logger.error("Error while migrating Data from Int to UUID (Type Change)")
logger.error(ex)
@@ -101,118 +93,6 @@ def migrate(migrator: Migrator, database, **kwargs):
last_migration.delete()
return
try:
logger.info("Migrating Data from Int to UUID (Foreign Keys)")
Console.info("Migrating Data from Int to UUID (Foreign Keys)")
# Changes on Audit Log Table
for audit_log in AuditLog.select():
old_server_id = audit_log.server_id_id
if old_server_id == "0" or old_server_id is None:
server_uuid = None
else:
try:
server = Servers.get_by_id(old_server_id)
server_uuid = server.server_uuid
except:
server_uuid = old_server_id
AuditLog.update(server_id=server_uuid).where(
AuditLog.audit_id == audit_log.audit_id
).execute()
# Changes on Webhooks Log Table
for webhook in Webhooks.select():
old_server_id = webhook.server_id_id
try:
server = Servers.get_by_id(old_server_id)
server_uuid = server.server_uuid
except:
server_uuid = old_server_id
Webhooks.update(server_id=server_uuid).where(
Webhooks.id == webhook.id
).execute()
# Changes on Schedules Log Table
for schedule in Schedules.select():
old_server_id = schedule.server_id_id
try:
server = Servers.get_by_id(old_server_id)
server_uuid = server.server_uuid
except:
server_uuid = old_server_id
Schedules.update(server_id=server_uuid).where(
Schedules.schedule_id == schedule.schedule_id
).execute()
# Changes on Backups Log Table
for backup in Backups.select():
old_server_id = backup.server_id_id
try:
server = Servers.get_by_id(old_server_id)
server_uuid = server.server_uuid
except:
server_uuid = old_server_id
Backups.update(server_id=server_uuid).where(
Backups.server_id == old_server_id
).execute()
# Changes on RoleServers Log Table
for role_servers in RoleServers.select():
old_server_id = role_servers.server_id_id
try:
server = Servers.get_by_id(old_server_id)
server_uuid = server.server_uuid
except:
server_uuid = old_server_id
RoleServers.update(server_id=server_uuid).where(
RoleServers.role_id == role_servers.id
and RoleServers.server_id == old_server_id
).execute()
logger.info("Migrating Data from Int to UUID (Foreign Keys) : SUCCESS")
Console.info("Migrating Data from Int to UUID (Foreign Keys) : SUCCESS")
except Exception as ex:
logger.error("Error while migrating Data from Int to UUID (Foreign Keys)")
logger.error(ex)
Console.error("Error while migrating Data from Int to UUID (Foreign Keys)")
Console.error(ex)
last_migration = MigrateHistory.get_by_id(MigrateHistory.select().count())
last_migration.delete()
return
try:
logger.info("Migrating Data from Int to UUID (Primary Keys)")
Console.info("Migrating Data from Int to UUID (Primary Keys)")
# Migrating servers from the old id type to the new one
for server in Servers.select():
Servers.update(server_id=server.server_uuid).where(
Servers.server_id == server.server_id
).execute()
logger.info("Migrating Data from Int to UUID (Primary Keys) : SUCCESS")
Console.info("Migrating Data from Int to UUID (Primary Keys) : SUCCESS")
except Exception as ex:
logger.error("Error while migrating Data from Int to UUID (Primary Keys)")
logger.error(ex)
Console.error("Error while migrating Data from Int to UUID (Primary Keys)")
Console.error(ex)
last_migration = MigrateHistory.get_by_id(MigrateHistory.select().count())
last_migration.delete()
return
# Changes on Server Table
logger.info("Migrating Data from Int to UUID (Removing UUID Field from Servers)")
Console.info("Migrating Data from Int to UUID (Removing UUID Field from Servers)")
migrator.drop_columns("servers", ["server_uuid"])
migrator.run()
logger.info(
"Migrating Data from Int to UUID (Removing UUID Field from Servers) : SUCCESS"
)
Console.info(
"Migrating Data from Int to UUID (Removing UUID Field from Servers) : SUCCESS"
)
return

View File

@@ -0,0 +1,326 @@
import datetime
import uuid
import peewee
import logging
from app.classes.shared.console import Console
from app.classes.shared.migration import Migrator, MigrateHistory
from app.classes.models.management import (
AuditLog,
Webhooks,
Schedules,
Backups,
)
from app.classes.models.server_permissions import RoleServers
from app.classes.models.base_model import BaseModel
logger = logging.getLogger(__name__)
def migrate(migrator: Migrator, database, **kwargs):
"""
Write your migrations here.
"""
db = database
# **********************************************************************************
# Servers New Model from Old (easier to migrate without dumping Database)
# **********************************************************************************
class Servers(peewee.Model):
server_id = peewee.CharField(primary_key=True, default=str(uuid.uuid4()))
created = peewee.DateTimeField(default=datetime.datetime.now)
server_uuid = peewee.CharField(default="", index=True)
server_name = peewee.CharField(default="Server", index=True)
path = peewee.CharField(default="")
backup_path = peewee.CharField(default="")
executable = peewee.CharField(default="")
log_path = peewee.CharField(default="")
execution_command = peewee.CharField(default="")
auto_start = peewee.BooleanField(default=0)
auto_start_delay = peewee.IntegerField(default=10)
crash_detection = peewee.BooleanField(default=0)
stop_command = peewee.CharField(default="stop")
executable_update_url = peewee.CharField(default="")
server_ip = peewee.CharField(default="127.0.0.1")
server_port = peewee.IntegerField(default=25565)
logs_delete_after = peewee.IntegerField(default=0)
type = peewee.CharField(default="minecraft-java")
show_status = peewee.BooleanField(default=1)
created_by = peewee.IntegerField(default=-100)
shutdown_timeout = peewee.IntegerField(default=60)
ignored_exits = peewee.CharField(default="0")
class Meta:
table_name = "servers"
database = db
this_migration = MigrateHistory.get_or_none(
MigrateHistory.name == "20240217_rework_servers_uuid_part2"
)
if this_migration is not None:
Console.debug("Update database already done, skipping this part")
return
else:
servers_columns = db.get_columns("servers")
if not any(
column_data.name == "server_uuid" for column_data in servers_columns
):
Console.debug(
"Servers.server_uuid already deleted in Crafty version 4.3.0, skipping this part"
)
return
try:
logger.info("Migrating Data from Int to UUID (Foreign Keys)")
Console.info("Migrating Data from Int to UUID (Foreign Keys)")
# Changes on Audit Log Table
for audit_log in AuditLog.select():
old_server_id = audit_log.server_id_id
if old_server_id == "0" or old_server_id is None:
server_uuid = None
else:
try:
server = Servers.get_by_id(old_server_id)
server_uuid = server.server_uuid
except:
server_uuid = old_server_id
AuditLog.update(server_id=server_uuid).where(
AuditLog.audit_id == audit_log.audit_id
).execute()
# Changes on Webhooks Log Table
for webhook in Webhooks.select():
old_server_id = webhook.server_id_id
try:
server = Servers.get_by_id(old_server_id)
server_uuid = server.server_uuid
except:
server_uuid = old_server_id
Webhooks.update(server_id=server_uuid).where(
Webhooks.id == webhook.id
).execute()
# Changes on Schedules Log Table
for schedule in Schedules.select():
old_server_id = schedule.server_id_id
try:
server = Servers.get_by_id(old_server_id)
server_uuid = server.server_uuid
except:
server_uuid = old_server_id
Schedules.update(server_id=server_uuid).where(
Schedules.schedule_id == schedule.schedule_id
).execute()
# Changes on Backups Log Table
for backup in Backups.select():
old_server_id = backup.server_id_id
try:
server = Servers.get_by_id(old_server_id)
server_uuid = server.server_uuid
except:
server_uuid = old_server_id
Backups.update(server_id=server_uuid).where(
Backups.server_id == old_server_id
).execute()
# Changes on RoleServers Log Table
for role_servers in RoleServers.select():
old_server_id = role_servers.server_id_id
try:
server = Servers.get_by_id(old_server_id)
server_uuid = server.server_uuid
except:
server_uuid = old_server_id
RoleServers.update(server_id=server_uuid).where(
RoleServers.role_id == role_servers.id
and RoleServers.server_id == old_server_id
).execute()
logger.info("Migrating Data from Int to UUID (Foreign Keys) : SUCCESS")
Console.info("Migrating Data from Int to UUID (Foreign Keys) : SUCCESS")
except Exception as ex:
logger.error("Error while migrating Data from Int to UUID (Foreign Keys)")
logger.error(ex)
Console.error("Error while migrating Data from Int to UUID (Foreign Keys)")
Console.error(ex)
last_migration = MigrateHistory.get_by_id(MigrateHistory.select().count())
last_migration.delete()
return
try:
logger.info("Migrating Data from Int to UUID (Primary Keys)")
Console.info("Migrating Data from Int to UUID (Primary Keys)")
# Migrating servers from the old id type to the new one
for server in Servers.select():
Servers.update(server_id=server.server_uuid).where(
Servers.server_id == server.server_id
).execute()
logger.info("Migrating Data from Int to UUID (Primary Keys) : SUCCESS")
Console.info("Migrating Data from Int to UUID (Primary Keys) : SUCCESS")
except Exception as ex:
logger.error("Error while migrating Data from Int to UUID (Primary Keys)")
logger.error(ex)
Console.error("Error while migrating Data from Int to UUID (Primary Keys)")
Console.error(ex)
last_migration = MigrateHistory.get_by_id(MigrateHistory.select().count())
last_migration.delete()
return
return
def rollback(migrator: Migrator, database, **kwargs):
"""
Write your rollback migrations here.
"""
db = database
# Condition to prevent re-running this rollback each time another rollback is performed
this_migration = MigrateHistory.get_or_none(
MigrateHistory.name == "20240217_rework_servers_uuid_part2"
)
if this_migration is None:
Console.debug("Update database already done, skipping this part")
return
# **********************************************************************************
# Servers New Model from Old (easier to migrate without dumping Database)
# **********************************************************************************
class Servers(peewee.Model):
server_id = peewee.CharField(primary_key=True, default=str(uuid.uuid4()))
created = peewee.DateTimeField(default=datetime.datetime.now)
server_uuid = peewee.CharField(default="", index=True)
server_name = peewee.CharField(default="Server", index=True)
path = peewee.CharField(default="")
backup_path = peewee.CharField(default="")
executable = peewee.CharField(default="")
log_path = peewee.CharField(default="")
execution_command = peewee.CharField(default="")
auto_start = peewee.BooleanField(default=0)
auto_start_delay = peewee.IntegerField(default=10)
crash_detection = peewee.BooleanField(default=0)
stop_command = peewee.CharField(default="stop")
executable_update_url = peewee.CharField(default="")
server_ip = peewee.CharField(default="127.0.0.1")
server_port = peewee.IntegerField(default=25565)
logs_delete_after = peewee.IntegerField(default=0)
type = peewee.CharField(default="minecraft-java")
show_status = peewee.BooleanField(default=1)
created_by = peewee.IntegerField(default=-100)
shutdown_timeout = peewee.IntegerField(default=60)
ignored_exits = peewee.CharField(default="0")
class Meta:
table_name = "servers"
database = db
try:
logger.info("Migrating Data from UUID to Int (Primary Keys)")
Console.info("Migrating Data from UUID to Int (Primary Keys)")
# Migrating servers from the old id type to the new one
new_id = 0
for server in Servers.select():
new_id += 1
Servers.update(server_uuid=server.server_id).where(
Servers.server_id == server.server_id
).execute()
Servers.update(server_id=new_id).where(
Servers.server_id == server.server_id
).execute()
logger.info("Migrating Data from UUID to Int (Primary Keys) : SUCCESS")
Console.info("Migrating Data from UUID to Int (Primary Keys) : SUCCESS")
except Exception as ex:
logger.error("Error while migrating Data from UUID to Int (Primary Keys)")
logger.error(ex)
Console.error("Error while migrating Data from UUID to Int (Primary Keys)")
Console.error(ex)
last_migration = MigrateHistory.get_by_id(MigrateHistory.select().count())
last_migration.delete()
return
try:
logger.info("Migrating Data from UUID to Int (Foreign Keys)")
Console.info("Migrating Data from UUID to Int (Foreign Keys)")
# Changes on Audit Log Table
for audit_log in AuditLog.select():
old_server_id = audit_log.server_id_id
if old_server_id is None:
new_server_id = 0
else:
try:
server = Servers.get_or_none(Servers.server_uuid == old_server_id)
new_server_id = server.server_id
except:
new_server_id = old_server_id
AuditLog.update(server_id=new_server_id).where(
AuditLog.audit_id == audit_log.audit_id
).execute()
# Changes on Webhooks Log Table
for webhook in Webhooks.select():
old_server_id = webhook.server_id_id
try:
server = Servers.get_or_none(Servers.server_uuid == old_server_id)
new_server_id = server.server_id
except:
new_server_id = old_server_id
Webhooks.update(server_id=new_server_id).where(
Webhooks.id == webhook.id
).execute()
# Changes on Schedules Log Table
for schedule in Schedules.select():
old_server_id = schedule.server_id_id
try:
server = Servers.get_or_none(Servers.server_uuid == old_server_id)
new_server_id = server.server_id
except:
new_server_id = old_server_id
Schedules.update(server_id=new_server_id).where(
Schedules.schedule_id == schedule.schedule_id
).execute()
# Changes on Backups Log Table
for backup in Backups.select():
old_server_id = backup.server_id_id
try:
server = Servers.get_or_none(Servers.server_uuid == old_server_id)
new_server_id = server.server_id
except:
new_server_id = old_server_id
Backups.update(server_id=new_server_id).where(
Backups.server_id == old_server_id
).execute()
# Changes on RoleServers Log Table
for role_servers in RoleServers.select():
old_server_id = role_servers.server_id_id
try:
server = Servers.get_or_none(Servers.server_uuid == old_server_id)
new_server_id = server.server_id
except:
new_server_id = old_server_id
RoleServers.update(server_id=new_server_id).where(
RoleServers.role_id == role_servers.id
and RoleServers.server_id == old_server_id
).execute()
logger.info("Migrating Data from UUID to Int (Foreign Keys) : SUCCESS")
Console.info("Migrating Data from UUID to Int (Foreign Keys) : SUCCESS")
except Exception as ex:
logger.error("Error while migrating Data from UUID to Int (Foreign Keys)")
logger.error(ex)
Console.error("Error while migrating Data from UUID to Int (Foreign Keys)")
Console.error(ex)
last_migration = MigrateHistory.get_by_id(MigrateHistory.select().count())
last_migration.delete()
return
return

View File

@@ -7,6 +7,7 @@ from app.classes.shared.console import Console
from app.classes.shared.migration import Migrator, MigrateHistory
from app.classes.models.management import Schedules, Backups
from app.classes.models.server_permissions import RoleServers
from app.classes.models.servers import Servers
logger = logging.getLogger(__name__)
@@ -17,40 +18,7 @@ def migrate(migrator: Migrator, database, **kwargs):
"""
db = database
# **********************************************************************************
# Servers New Model from Old (easier to migrate without dumping Database)
# **********************************************************************************
class Servers(peewee.Model):
server_id = peewee.CharField(primary_key=True, default=str(uuid.uuid4()))
created = peewee.DateTimeField(default=datetime.datetime.now)
server_name = peewee.CharField(default="Server", index=True)
path = peewee.CharField(default="")
backup_path = peewee.CharField(default="")
executable = peewee.CharField(default="")
log_path = peewee.CharField(default="")
execution_command = peewee.CharField(default="")
auto_start = peewee.BooleanField(default=0)
auto_start_delay = peewee.IntegerField(default=10)
crash_detection = peewee.BooleanField(default=0)
stop_command = peewee.CharField(default="stop")
executable_update_url = peewee.CharField(default="")
server_ip = peewee.CharField(default="127.0.0.1")
server_port = peewee.IntegerField(default=25565)
logs_delete_after = peewee.IntegerField(default=0)
type = peewee.CharField(default="minecraft-java")
show_status = peewee.BooleanField(default=1)
created_by = peewee.IntegerField(default=-100)
shutdown_timeout = peewee.IntegerField(default=60)
ignored_exits = peewee.CharField(default="0")
class Meta:
table_name = "servers"
database = db
try:
logger.info("Migrating Data from Int to UUID (Fixing Issue)")
Console.info("Migrating Data from Int to UUID (Fixing Issue)")
# Changes on Servers Roles Table
migrator.alter_column_type(
RoleServers,
@@ -87,10 +55,13 @@
),
)
migrator.run()
logger.info("Migrating Data from Int to UUID (Fixing Issue) : SUCCESS")
Console.info("Migrating Data from Int to UUID (Fixing Issue) : SUCCESS")
# Drop Column after migration
servers_columns = db.get_columns("servers")
if any(column_data.name == "server_uuid" for column_data in servers_columns):
Console.debug(
"Servers.server_uuid not deleted before Crafty version 4.3.2, skipping this part"
)
migrator.drop_columns("servers", ["server_uuid"])
except Exception as ex:
logger.error("Error while migrating Data from Int to UUID (Fixing Issue)")
@@ -130,3 +101,7 @@ def rollback(migrator: Migrator, database, **kwargs):
"server_id",
peewee.IntegerField(null=True),
)
migrator.add_columns(
"servers", server_uuid=peewee.CharField(default="", index=True)
) # Recreating the column for roll back

View File

@@ -18,5 +18,5 @@ termcolor==1.1
tornado==6.3.3
tzlocal==5.1
jsonschema==4.19.1
orjson==3.9.7
orjson==3.9.15
prometheus-client==0.17.1
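For context on the orjson bump above: orjson exposes its decode error class at the top level, which is why the handlers in this merge switch from orjson.decoder.JSONDecodeError to orjson.JSONDecodeError. A minimal sketch of the updated pattern (a standalone illustration, not the full handler code):

import orjson

def parse_request_body(raw: bytes):
    # orjson.JSONDecodeError subclasses ValueError, so malformed
    # input is caught here regardless of which alias is raised
    try:
        return orjson.loads(raw)
    except orjson.JSONDecodeError as e:
        return {"status": "error", "error": "INVALID_JSON", "error_data": str(e)}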