šŸ” Code Extractor

class FolderDebugger

Maturity: 46

A debugging utility class for analyzing and troubleshooting folder structure and visibility issues in the reMarkable cloud sync system.

File:
/tf/active/vicechatdev/e-ink-llm/cloudtest/debug_gpt_in_folder.py
Lines:
12 - 197
Complexity:
moderate

Purpose

FolderDebugger provides diagnostic capabilities for investigating folder structure problems in reMarkable cloud storage. It authenticates with the reMarkable API, retrieves and analyzes the root document schema, examines the hardcoded 'gpt_in' folder structure, finds documents within a given folder, and checks sync status. The class is designed for debugging scenarios where folders or documents are not appearing correctly in the reMarkable web interface or on devices.
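
The colon-separated docSchema entry format that the class repeatedly parses can be factored into a standalone helper. The field positions (hash in field 0, UUID in field 2, type in field 3, size in field 4) are taken directly from the indices the source code uses; the meaning of the remaining fields is not documented here, so they are left unparsed. This is a sketch, not part of the original module:

```python
def parse_docschema_entry(line: str):
    """Parse one colon-separated docSchema entry line into a dict.

    Field positions mirror the indices FolderDebugger uses:
    parts[0]=hash, parts[2]=uuid, parts[3]=type, parts[4]=size.
    Returns None for header lines or malformed entries.
    """
    parts = line.split(':')
    if len(parts) < 5:
        return None
    return {
        'hash': parts[0],
        'uuid': parts[2],
        'type': parts[3],
        'size': parts[4],
        'full_line': line,
    }

# Illustrative entry (hash/type/size values are placeholders, not real data)
entry = parse_docschema_entry(
    "abc123:0:99c6551f-2855-44cf-a4e4-c9c586558f42:folder:1024"
)
```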

Source Code

class FolderDebugger:
    """Debug folder structure and visibility issues"""
    
    def __init__(self):
        # Load auth session
        auth = RemarkableAuth()
        self.session = auth.get_authenticated_session()
        
        if not self.session:
            raise RuntimeError("Failed to authenticate with reMarkable")
        
        print("šŸ”„ Folder Debugger Initialized")
    
    def get_root_info(self):
        """Get current root.docSchema info"""
        print("\nšŸ“‹ Getting current root.docSchema...")
        
        # Get root info
        root_response = self.session.get("https://eu.tectonic.remarkable.com/sync/v4/root")
        root_response.raise_for_status()
        root_data = root_response.json()
        
        # Get root content
        root_content_response = self.session.get(f"https://eu.tectonic.remarkable.com/sync/v3/files/{root_data['hash']}")
        root_content_response.raise_for_status()
        root_content = root_content_response.text
        
        print(f"āœ… Current root hash: {root_data['hash']}")
        print(f"āœ… Current generation: {root_data.get('generation')}")
        
        return root_data, root_content
    
    def analyze_gpt_in_folder(self, root_content: str):
        """Analyze the gpt_in folder in detail"""
        print(f"\nšŸ“ Analyzing gpt_in folder structure...")
        
        gpt_in_uuid = "99c6551f-2855-44cf-a4e4-c9c586558f42"
        
        # Find gpt_in folder in root
        lines = root_content.strip().split('\n')
        gpt_in_entry = None
        
        for line in lines[1:]:  # Skip version header
            if gpt_in_uuid in line:
                parts = line.split(':')
                if len(parts) >= 5:
                    gpt_in_entry = {
                        'hash': parts[0],
                        'uuid': parts[2],
                        'type': parts[3],
                        'size': parts[4],
                        'full_line': line
                    }
                    break
        
        if not gpt_in_entry:
            print(f"āŒ gpt_in folder not found in root.docSchema!")
            return None
            
        print(f"āœ… Found gpt_in folder entry:")
        print(f"   Hash: {gpt_in_entry['hash']}")
        print(f"   UUID: {gpt_in_entry['uuid']}")
        print(f"   Type: {gpt_in_entry['type']}")
        print(f"   Size: {gpt_in_entry['size']}")
        print(f"   Full line: {gpt_in_entry['full_line']}")
        
        # Get gpt_in folder's docSchema
        print(f"\nšŸ“„ Getting gpt_in folder docSchema...")
        folder_response = self.session.get(f"https://eu.tectonic.remarkable.com/sync/v3/files/{gpt_in_entry['hash']}")
        folder_response.raise_for_status()
        folder_content = folder_response.text
        
        print(f"āœ… gpt_in docSchema size: {len(folder_content)} bytes")
        print(f"šŸ“„ gpt_in docSchema content:")
        
        folder_lines = folder_content.strip().split('\n')
        for i, line in enumerate(folder_lines):
            print(f"   Line {i}: {line}")
        
        # Get gpt_in folder metadata
        metadata_hash = None
        for line in folder_lines[1:]:  # Skip version
            if ':' in line and '.metadata' in line:
                parts = line.split(':')
                if len(parts) >= 5:
                    metadata_hash = parts[0]
                    break
        
        if metadata_hash:
            print(f"\nšŸ“ Getting gpt_in folder metadata...")
            metadata_response = self.session.get(f"https://eu.tectonic.remarkable.com/sync/v3/files/{metadata_hash}")
            metadata_response.raise_for_status()
            folder_metadata = json.loads(metadata_response.text)
            
            print(f"āœ… gpt_in folder metadata:")
            for key, value in folder_metadata.items():
                print(f"   {key}: {value}")
            
            return gpt_in_entry, folder_metadata
        else:
            print(f"āŒ Could not find metadata for gpt_in folder")
            return gpt_in_entry, None
    
    def find_documents_in_folder(self, root_content: str, folder_uuid: str):
        """Find all documents that claim to be in the specified folder"""
        print(f"\nšŸ” Finding documents with parent '{folder_uuid}'...")
        
        documents_in_folder = []
        lines = root_content.strip().split('\n')
        
        for line in lines[1:]:  # Skip version header
            if ':' in line:
                parts = line.split(':')
                if len(parts) >= 5:
                    doc_uuid = parts[2]
                    doc_hash = parts[0]
                    doc_type = parts[3]
                    
                    # Skip the folder itself
                    if doc_uuid == folder_uuid:
                        continue
                    
                    # Get document metadata to check parent
                    try:
                        doc_response = self.session.get(f"https://eu.tectonic.remarkable.com/sync/v3/files/{doc_hash}")
                        doc_response.raise_for_status()
                        doc_content = doc_response.text
                        
                        # Find metadata hash in document schema
                        doc_lines = doc_content.strip().split('\n')
                        metadata_hash = None
                        for doc_line in doc_lines[1:]:
                            if ':' in doc_line and '.metadata' in doc_line:
                                metadata_parts = doc_line.split(':')
                                if len(metadata_parts) >= 5:
                                    metadata_hash = metadata_parts[0]
                                    break
                        
                        if metadata_hash:
                            metadata_response = self.session.get(f"https://eu.tectonic.remarkable.com/sync/v3/files/{metadata_hash}")
                            metadata_response.raise_for_status()
                            metadata = json.loads(metadata_response.text)
                            
                            if metadata.get('parent') == folder_uuid:
                                documents_in_folder.append({
                                    'uuid': doc_uuid,
                                    'hash': doc_hash,
                                    'type': doc_type,
                                    'name': metadata.get('visibleName', 'Unknown'),
                                    'parent': metadata.get('parent'),
                                    'deleted': metadata.get('deleted', False)
                                })
                    
                    except Exception as e:
                        print(f"   āš ļø Could not check document {doc_uuid[:8]}...: {e}")
                        continue
        
        print(f"āœ… Found {len(documents_in_folder)} documents in folder '{folder_uuid}':")
        for doc in documents_in_folder:
            status = "šŸ—‘ļø DELETED" if doc['deleted'] else "āœ… Active"
            print(f"   {status} {doc['name']} ({doc['uuid'][:8]}...)")
        
        return documents_in_folder
    
    def check_web_app_sync_status(self):
        """Check if there are any sync-related issues"""
        print(f"\n🌐 Checking web app sync indicators...")
        
        # Check if there are any pending sync operations
        try:
            sync_response = self.session.get("https://eu.tectonic.remarkable.com/sync/v4/root")
            sync_response.raise_for_status()
            sync_data = sync_response.json()
            
            print(f"āœ… Current sync generation: {sync_data.get('generation')}")
            print(f"āœ… Current sync hash: {sync_data.get('hash')}")
            
            # Check for any broadcast flags or sync indicators
            if 'broadcast' in sync_data:
                print(f"šŸ“” Broadcast flag: {sync_data['broadcast']}")
            
            return sync_data
            
        except Exception as e:
            print(f"āŒ Could not check sync status: {e}")
            return None

Parameters

Name Type Default Kind
(none — the constructor takes no parameters)

Parameter Details

__init__: The constructor takes no parameters. It automatically initializes authentication with the reMarkable service using RemarkableAuth, obtains an authenticated session, and prints a confirmation message. Raises RuntimeError if authentication fails.

Return Value

Instantiation returns a FolderDebugger object with an authenticated session. Methods return various data structures: get_root_info() returns a tuple of (root_data dict, root_content string); analyze_gpt_in_folder() returns None if the folder is missing from root.docSchema, otherwise a tuple of (gpt_in_entry dict, folder_metadata dict), where folder_metadata is None if no metadata entry could be located; find_documents_in_folder() returns a list of document dictionaries; check_web_app_sync_status() returns a sync_data dict, or None on failure.

Class Interface

Methods

__init__(self)

Purpose: Initialize the FolderDebugger with an authenticated reMarkable session

Returns: None (constructor)

get_root_info(self) -> tuple

Purpose: Retrieve the current root.docSchema information including hash and generation

Returns: Tuple of (root_data dict containing hash and generation, root_content string with full schema)

analyze_gpt_in_folder(self, root_content: str) -> tuple | None

Purpose: Analyze the gpt_in folder structure in detail, including its docSchema and metadata

Parameters:

  • root_content: The root document schema content as a string, obtained from get_root_info()

Returns: None if the folder is not found in root.docSchema; otherwise a tuple of (gpt_in_entry dict with hash/uuid/type/size, folder_metadata dict), where folder_metadata is None if no metadata entry was located
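
Because the method has three possible result shapes (None, a full tuple, or a tuple whose second element is None), callers may want a defensive unpacking pattern. The sketch below uses stubbed values in place of live API results:

```python
def unpack_folder_analysis(result):
    """Normalize analyze_gpt_in_folder()'s possible return shapes
    (None, (entry, metadata), or (entry, None)) into a fixed pair."""
    if result is None:
        return None, None
    entry, metadata = result
    return entry, metadata

# Stubbed results standing in for real API responses
entry, meta = unpack_folder_analysis(({'uuid': 'abc'}, {'visibleName': 'gpt_in'}))
no_entry, no_meta = unpack_folder_analysis(None)
```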

find_documents_in_folder(self, root_content: str, folder_uuid: str) -> list

Purpose: Find all documents that have the specified folder as their parent

Parameters:

  • root_content: The root document schema content as a string
  • folder_uuid: The UUID of the folder to search for documents in

Returns: List of dictionaries, each containing uuid, hash, type, name, parent, and deleted status of documents
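
Since each returned dictionary carries a deleted flag, a natural follow-up is partitioning the list into active and deleted documents, as the class itself does when printing status icons. The sample records below are illustrative, not real API data:

```python
def split_by_deleted(documents):
    """Partition find_documents_in_folder() results into
    (active, deleted) lists based on each record's 'deleted' flag."""
    active = [d for d in documents if not d.get('deleted', False)]
    deleted = [d for d in documents if d.get('deleted', False)]
    return active, deleted

# Illustrative records shaped like the method's return value
docs = [
    {'uuid': '1111', 'name': 'Notes', 'deleted': False},
    {'uuid': '2222', 'name': 'Old draft', 'deleted': True},
]
active, deleted = split_by_deleted(docs)
```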

check_web_app_sync_status(self) -> dict | None

Purpose: Check the current sync status and generation from the reMarkable API

Returns: Dictionary containing sync generation, hash, and optional broadcast flags, or None if check fails
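
One way to use the returned dictionary is to compare two snapshots and detect whether a sync occurred in between. The assumption that the generation value increases monotonically across syncs is inferred from how the class reports it, not from official documentation:

```python
def sync_advanced(before, after):
    """Return True if the sync generation increased between two
    check_web_app_sync_status() snapshots (either may be None on failure)."""
    if not before or not after:
        return False
    return after.get('generation', 0) > before.get('generation', 0)

# Illustrative snapshots; real values come from the sync/v4/root endpoint
first = {'generation': 41, 'hash': 'aaa'}
second = {'generation': 42, 'hash': 'bbb'}
```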

Attributes

Name Type Description Scope
session requests.Session Authenticated HTTP session for making API requests to reMarkable cloud services instance

Dependencies

  • json
  • requests

Required Imports

import json
from auth import RemarkableAuth

Usage Example

# Initialize the debugger
debugger = FolderDebugger()

# Get root document schema information
root_data, root_content = debugger.get_root_info()

# Analyze the gpt_in folder structure (returns None if the folder is missing)
result = debugger.analyze_gpt_in_folder(root_content)
if result:
    gpt_in_entry, folder_metadata = result

# Find all documents in the gpt_in folder
folder_uuid = '99c6551f-2855-44cf-a4e4-c9c586558f42'
documents = debugger.find_documents_in_folder(root_content, folder_uuid)

# Check sync status
sync_status = debugger.check_web_app_sync_status()

Best Practices

  • Always instantiate FolderDebugger in a try-except block to handle authentication failures gracefully
  • Call get_root_info() first to obtain the root_content needed by other analysis methods
  • The class makes multiple API calls, so be mindful of rate limiting and network latency
  • Methods print diagnostic information to stdout, making them suitable for interactive debugging sessions
  • The session attribute maintains authentication state and should not be modified directly
  • Methods may raise HTTP exceptions if API calls fail; wrap calls in try-except blocks for production use
  • The class is stateless except for the session; each method can be called independently after initialization
  • analyze_gpt_in_folder() is hardcoded to analyze a specific folder UUID; modify for other folders as needed
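
The first and sixth bullets can be combined into a guarded driver pattern. FolderDebugger itself requires live credentials, so this sketch takes a factory callable (a hypothetical indirection, not part of the original module) to show the shape of the error handling:

```python
def run_diagnostics(debugger_factory):
    """Construct the debugger and fetch root info, converting auth and
    HTTP failures into a single None result instead of a traceback."""
    try:
        debugger = debugger_factory()           # may raise RuntimeError on auth failure
        root_data, root_content = debugger.get_root_info()  # may raise HTTPError
        return root_data
    except Exception as e:
        print(f"Diagnostics aborted: {e}")
        return None

# Stub factory simulating an authentication failure
def failing_factory():
    raise RuntimeError("Failed to authenticate with reMarkable")

result = run_diagnostics(failing_factory)  # prints the abort message
```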

Similar Components

AI-powered semantic similarity - components with related functionality:

  • function main_v85 81.9% similar

    Diagnostic function that debugs visibility issues with the 'gpt_in' folder in a reMarkable tablet's file system by analyzing folder metadata, document contents, and sync status.

    From: /tf/active/vicechatdev/e-ink-llm/cloudtest/debug_gpt_in_folder.py
  • function check_gpt_in_folder 73.0% similar

    Diagnostic function that checks the status, metadata, and contents of a specific 'gpt_in' folder in the reMarkable cloud storage system, providing detailed analysis of folder structure and potential sync issues.

    From: /tf/active/vicechatdev/e-ink-llm/cloudtest/check_gpt_in_folder.py
  • function test_remarkable_discovery 68.1% similar

    Asynchronous test function that verifies reMarkable cloud folder discovery functionality by initializing a RemarkableCloudWatcher and attempting to locate the 'gpt_out' folder.

    From: /tf/active/vicechatdev/e-ink-llm/test_mixed_mode.py
  • function verify_document_status 67.2% similar

    Verifies the current status and metadata of a specific test document in the reMarkable cloud sync system by querying the sync API endpoints and analyzing the document's location and properties.

    From: /tf/active/vicechatdev/e-ink-llm/cloudtest/verify_document_status.py
  • function main_v113 67.0% similar

    Analyzes and compares .content files for PDF documents stored in reMarkable cloud storage, identifying differences between working and non-working documents.

    From: /tf/active/vicechatdev/e-ink-llm/cloudtest/analyze_content_files.py