IT audit, part 2: Three more areas you can’t afford to overlook in a system evaluation
In this second part of our IT audit series, we explore the next three critical system areas: software quality and technical debt, operational aspects, and historical operational data.
During the transition of software system operation and development - or in the context of company acquisitions - the concept of IT system auditing often arises. But what does it involve? Why is it worth conducting? And how should such an audit process be envisioned? In this three-part series, we present nine key areas that should be considered during an IT audit. The second part provides an in-depth look at assessing software quality, operational aspects, and operational data.
Content: 3 more areas of an IT audit
4. Software quality assessment and technical debt
The maintainability of a software system and the resources required for further development are not only influenced by its size and complexity but also by the quality of its implementation. A well-designed, high-complexity system may demand fewer resources than a moderate-complexity system with poor implementation quality.
But what exactly defines software quality? How can it be measured? These are the key questions we will address in the following sections.
What is technical debt?
Technical debt refers to compromises made during development that may accelerate short-term progress but create long-term maintenance and scalability challenges. These trade-offs may result from deliberate decisions to meet deadlines, or they may stem from a lack of expertise in system design and implementation.
As technical debt accumulates, maintaining and enhancing the system becomes more resource-intensive. Increased technical debt also means more time spent on daily operations and bug fixes, reducing the capacity available for developing new features.
Static code analysis tools play a crucial role in detecting and measuring technical debt. These tools scan the codebase and identify code smells—problematic coding patterns that can indicate deeper structural issues.
Architectural debt
Architectural debt arises from deficiencies in the system’s modular design and overall architecture.
Flaws in modular structure design
A poorly designed modular structure can significantly impact a system’s scalability, maintainability, and future development costs. For example, a monolithic architecture, where all components function as a single, tightly coupled unit, makes it challenging to scale, maintain, and introduce new features. Similarly, even in a modular system, if components are overly interdependent, changes to one module can have unintended consequences elsewhere. This increases the likelihood of regression errors and raises long-term development costs.
To identify architectural flaws, the following key metrics can be useful:
Number of dependencies (Fan-in / Fan-out): Measures the number of modules that reference a specific module (Fan-in) and the number of modules that a given module depends on (Fan-out). High values suggest tight coupling and potential architectural rigidity.
Coupling Between Objects (CBO): A high CBO value indicates strong dependencies between modules, increasing technical debt and maintenance difficulty.
Lack of Cohesion in Methods (LCOM): High LCOM values indicate classes that bundle unrelated responsibilities; low values indicate well-structured, cohesive classes that follow the single responsibility principle.
Module size and distribution: Analyze whether the system is properly divided into smaller, independent components.
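To give a sense of how the first of these metrics can be approximated in practice, the sketch below counts fan-in and fan-out over a hand-built module dependency map; the module names and dependencies are purely hypothetical, and a real audit would extract the graph from the codebase with a dependency analysis tool.

```python
from collections import defaultdict

# Hypothetical module dependency map: module -> modules it depends on.
dependencies = {
    "orders": ["billing", "inventory", "notifications"],
    "billing": ["notifications"],
    "inventory": ["notifications"],
    "notifications": [],
}

# Fan-out: how many modules a given module depends on.
fan_out = {module: len(deps) for module, deps in dependencies.items()}

# Fan-in: how many modules depend on a given module.
fan_in = defaultdict(int)
for module, deps in dependencies.items():
    for dep in deps:
        fan_in[dep] += 1

for module in dependencies:
    print(f"{module}: fan-in={fan_in[module]}, fan-out={fan_out[module]}")
```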
Improper separation of business logic and I/O handling
When business logic and data handling (such as database queries, file operations, or API calls) are tightly coupled, modifications to one component can inadvertently affect multiple areas of the system. This lack of separation complicates automated testing, technology upgrades, and system scalability, ultimately resulting in a fragile and difficult-to-maintain architecture.
To ensure proper structure and maintainability, the following design patterns can be applied:
Separation of business logic – Implementing a Service Layer or Domain-Driven Design (DDD) helps isolate business rules from lower-level operations.
Repository pattern – Decouples the persistence layer, making it easier to switch database technologies without modifying business logic.
Dependency injection – Reduces tight coupling between layers, improving flexibility and testability.
A system lacking these design principles is a strong indicator of poor code structure and may require refactoring to enhance maintainability and long-term scalability.
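As a minimal sketch of the repository pattern combined with dependency injection (all class and method names here are hypothetical, chosen only to illustrate the separation described above):

```python
from abc import ABC, abstractmethod


class OrderRepository(ABC):
    """Abstraction over the persistence layer."""

    @abstractmethod
    def find_unpaid(self) -> list[dict]:
        ...


class SqlOrderRepository(OrderRepository):
    """Concrete implementation: database access details stay in one place."""

    def find_unpaid(self) -> list[dict]:
        # Placeholder for a real SQL query.
        return [{"id": 1, "amount": 120.0}]


class BillingService:
    """Business logic depends only on the abstraction, injected via the constructor."""

    def __init__(self, repository: OrderRepository) -> None:
        self._repository = repository

    def total_outstanding(self) -> float:
        return sum(order["amount"] for order in self._repository.find_unpaid())


service = BillingService(SqlOrderRepository())
print(service.total_outstanding())
```

Swapping SqlOrderRepository for an in-memory fake in tests, or for a different database technology later, requires no change to BillingService.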
Code quality debt
Beyond assessing architectural design, evaluating source code quality is a critical aspect of an IT audit. Poorly written code significantly increases maintenance difficulty.
To assess code quality, static code analysis tools can be utilized. Additionally, the following key aspects should be closely examined.
Poor naming conventions and unreadable code formatting
Inconsistent naming conventions and poorly formatted code contribute significantly to technical debt, reducing code clarity and increasing maintenance costs. If variable, class, or method names are unclear or misleading, it becomes difficult for new developers to understand the system. Likewise, inconsistent or improperly formatted code decreases readability and leads to inefficiencies in debugging and development.
To eliminate these issues, teams can adopt coding standards and best practices to ensure consistency, along with automated formatting tools such as Prettier (JavaScript, TypeScript) or Black (Python) and linting rules to enforce proper formatting and naming conventions.
Duplicated code
Duplicated code increases maintenance efforts, as any modification must be updated in multiple places, raising the risk of inconsistencies and errors. This redundancy reduces code clarity and maintainability, making future development more complex.
To detect and measure code duplication, the following key metrics can be used:
Percentage of duplicated code – The proportion of repeated code fragments within the overall codebase.
Number of code clones – The count of identical or nearly identical code segments.
High cyclomatic complexity
Excessively high cyclomatic complexity (CC), especially when widespread across the codebase, indicates that the code is difficult to understand and modify. A high CC value suggests that a function or module has too many decision paths, making it harder to test, debug, and maintain.
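As a rough illustration of how nesting drives complexity up, compare the two hypothetical functions below: both encode the same made-up shipping rules, but the second replaces most decision points with a data lookup, leaving far fewer paths to test.

```python
# Higher-complexity version: nested decisions multiply the paths to test.
def shipping_cost(country: str, weight: float, express: bool) -> float:
    if country == "HU":
        if express:
            return 25.0 if weight > 10 else 15.0
        return 12.0 if weight > 10 else 8.0
    if express:
        return 40.0 if weight > 10 else 30.0
    return 20.0 if weight > 10 else 14.0


# Lower-complexity version: the branching logic becomes a table lookup.
RATES = {
    ("HU", True): (25.0, 15.0),
    ("HU", False): (12.0, 8.0),
    ("OTHER", True): (40.0, 30.0),
    ("OTHER", False): (20.0, 14.0),
}


def shipping_cost_refactored(country: str, weight: float, express: bool) -> float:
    region = "HU" if country == "HU" else "OTHER"
    heavy, light = RATES[(region, express)]
    return heavy if weight > 10 else light
```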
SOLID principles and technical debt
The SOLID principles serve as fundamental guidelines for maintainable and scalable software design. Violating these principles can contribute to technical debt, increasing maintenance costs, complexity, and regression risks.
Single Responsibility Principle (SRP): Each class should have only one responsibility. Breaking this principle results in high maintenance costs, as modifying one class can impact multiple functionalities.
Open/Closed Principle (OCP): Classes should be open for extension but closed for modification. If violated, introducing new functionality requires modifying existing code, increasing the risk of regression bugs.
Liskov Substitution Principle (LSP): Subclasses should be replaceable with their base classes without altering system behavior. If this principle is not followed, unexpected runtime errors and inconsistent behaviour can occur.
Interface Segregation Principle (ISP): Interfaces should not force implementing classes to include unused methods. Large, bloated interfaces create unnecessary dependencies and complicate code maintenance.
Dependency Inversion Principle (DIP): High-level modules should not depend on low-level modules; both should depend on abstractions. Violating this principle results in rigid, tightly coupled code that is difficult to test and modify.
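To make one of these principles concrete, here is a minimal, hypothetical sketch of the Open/Closed Principle: new discount types can be added as new classes without touching the existing checkout logic.

```python
from abc import ABC, abstractmethod


class DiscountRule(ABC):
    """Extension point: each discount type is a separate class."""

    @abstractmethod
    def apply(self, amount: float) -> float:
        ...


class NoDiscount(DiscountRule):
    def apply(self, amount: float) -> float:
        return amount


class SeasonalDiscount(DiscountRule):
    def apply(self, amount: float) -> float:
        return amount * 0.9


def checkout_total(amount: float, rule: DiscountRule) -> float:
    # Closed for modification: adding a LoyaltyDiscount class needs no change here.
    return rule.apply(amount)


print(checkout_total(100.0, SeasonalDiscount()))
```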
Lack of design pattern utilisation
Failing to apply well-established design patterns (e.g., Factory, Service Locator, Strategy, Command, Adapter, Decorator, Observer) can lead to code inconsistencies, high coupling, and low cohesion. This issue becomes particularly problematic in larger projects, where a consistent structure and scalability are essential for long-term maintainability.
During an IT audit, it is crucial to assess the presence and appropriate use of design patterns, as they significantly contribute to code sustainability, scalability, and ease of development.
Missing or faulty exception handling
The quality of exception handling is critical to ensuring system stability. Poorly implemented exception management can lead to hidden errors, unexpected crashes, and difficult-to-trace issues.
Common pitfalls include:
Catching exceptions too broadly, which swallows unrelated errors and hides the root cause of failures.
Overly specific error handling, which might miss unforeseen exceptions, leading to unpredictable failures.
Lack of structured logging, which makes it difficult to track and diagnose issues in production environments.
To mitigate these risks, systems should implement:
Detailed logging mechanisms to capture relevant error information.
Standardized exception handling patterns, such as:
Try-catch-finally for managing expected errors.
Circuit Breaker pattern to prevent cascading failures in distributed systems.
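For illustration, a minimal in-memory Circuit Breaker sketch (the threshold, cooldown, and error handling are simplified assumptions; production systems typically rely on an established library rather than a hand-rolled breaker):

```python
import time
from typing import Optional


class CircuitBreaker:
    """Opens after repeated failures and rejects calls until a cooldown has passed."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0) -> None:
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at: Optional[float] = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("Circuit open: downstream service is failing")
            # Cooldown elapsed: allow a trial call (half-open state).
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```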
Hardcoded values and configurations
Hardcoded values and configurations reduce flexibility and increase maintenance costs. Storing database credentials, API endpoints, or configuration parameters directly in the code makes modifications cumbersome and raises the risk of errors and security vulnerabilities.
Best practices for handling configurations include:
Storing configurations in external files
Using environment variables to separate sensitive data from application logic
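A small sketch of the second practice, reading configuration from environment variables instead of hardcoding it (the variable names and defaults are illustrative):

```python
import os

# Values are supplied by the deployment environment; only non-sensitive
# settings get a default, and missing secrets fail fast at startup.
DATABASE_URL = os.environ["DATABASE_URL"]
API_TIMEOUT_SECONDS = float(os.environ.get("API_TIMEOUT_SECONDS", "5.0"))
NEW_CHECKOUT_ENABLED = os.environ.get("NEW_CHECKOUT_ENABLED", "false") == "true"
```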
Using standard frameworks and libraries vs. custom implementations
During an IT audit, it is essential to assess whether standard frameworks and libraries have been utilized for non-business-specific tasks (e.g., logging, PDF generation, email handling) or if custom implementations were developed instead.
Leveraging widely used, well-documented libraries provides several advantages:
Reduced maintenance effort
Improved security and reliability
Better scalability and interoperability
Assessment of automated testing and test coverage
The stability and quality of a system are significantly enhanced when key business logic is well-covered by automated tests. During an IT audit, it is crucial to evaluate test coverage, both at the system-wide level and for individual modules.
Higher test coverage reduces the risk of introducing defects when making changes, as tests serve as an early warning system for potential issues.
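As a simple illustration, a unit test for a hypothetical pricing rule is shown below; assuming pytest is used, coverage can then be measured with a standard tool such as coverage.py (for example `coverage run -m pytest` followed by `coverage report`).

```python
import pytest


# In a real project this business rule would live in its own module.
def net_price(gross: float, vat_rate: float = 0.27) -> float:
    if gross < 0:
        raise ValueError("gross price cannot be negative")
    return round(gross / (1 + vat_rate), 2)


def test_net_price_removes_vat():
    assert net_price(127.0) == 100.0


def test_negative_price_is_rejected():
    with pytest.raises(ValueError):
        net_price(-1.0)
```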
Documentation review
Last but not least, the presence of up-to-date and relevant documentation is a key factor in system maintainability and knowledge transfer. High-level architectural and process documentation provides a clear understanding of system design and workflows. API and interface documentation ensures that system functionality is transparent and accessible to developers and integrators.
If documentation is outdated or incomplete, updating it may require significant effort. In such cases, tools such as static code analysis platforms and AI-driven solutions can assist in automatically generating documentation from the source code, at least partially.
The aspects discussed above contribute to a realistic assessment of software quality and associated risks. A thorough IT audit not only maps out the current state of the system but also enables organisations to take proactive steps to ensure that the software remains stable, sustainable, and cost-effective in the long run.
5. Assessment of operational aspects
The foundation of efficient software operation lies in having the right infrastructure, a well-defined development workflow, and proper tooling to support it. Assessing the current setup in these areas can be a critical part of the audit process.
Infrastructure assessment
Physical or virtualized environment
As a first step, it is crucial to determine whether the system currently runs on physical servers, in a virtualized environment (VMs), on a cloud platform, or within a containerized setup. This distinction has significant implications for operations, as well as for any potential migration or scaling plans.
Resource requirements and utilization
It is important to assess the system’s resource demands - including CPU, GPU, memory, storage, and bandwidth - as well as the available capacity and the actual utilization ratio. If the system is already operating near the limits of its current infrastructure, scaling or expansion will likely become necessary in the near future.
Scaling physical servers typically involves higher costs and more complex changes. In contrast, expansion in a virtualized environment is generally more flexible and easier to implement.
In the context of scaling, the quality of the codebase and its readiness for scalability also become critical factors—an aspect previously addressed during the software quality assessment.
Monitoring and alerts
A robust monitoring and alerting system is essential for the stable and efficient operation of any software environment. As part of the audit, it is important to evaluate:
What monitoring tools are currently in place (if any)?
Which metrics are being tracked (e.g. hardware performance, service availability, business KPIs)?
How does the alerting mechanism work (what thresholds trigger alerts, who receives them, and in what format)?
If monitoring and alert systems are missing or insufficient, their implementation must be factored into the operational transition plan.
Backup and recovery processes
In the event of hardware failures, software issues, or unexpected incidents, the presence of reliable backup and recovery procedures is critical. When taking over the operation of a system, the following aspects must be reviewed:
Are there clearly defined backup policies in place?
How frequently are backups created, and where are they stored?
What is the process for performing a restore operation in case of failure?
If these procedures are missing or incomplete, establishing proper backup mechanisms will require additional effort and cost.
It is also essential to define and assess the system's:
RTO (Recovery Time Objective): The maximum acceptable time within which a system or service must be restored after an incident.
RPO (Recovery Point Objective): The maximum tolerable amount of data loss measured in time.
If the current backup and recovery setup does not meet these requirements, appropriate solutions must be planned and implemented to ensure operational resilience.
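One way to make the RPO measurable during the audit is to check the age of the most recent backup against the agreed limit, as in this hypothetical sketch (the backup directory, file pattern, and six-hour target are assumptions):

```python
import time
from pathlib import Path

RPO_HOURS = 6                            # assumed target: at most 6 hours of data loss
BACKUP_DIR = Path("/var/backups/app")    # hypothetical backup location


def latest_backup_age_hours() -> float:
    # Raises ValueError if no backups exist, which is itself a finding.
    newest = max(BACKUP_DIR.glob("*.dump"), key=lambda p: p.stat().st_mtime)
    return (time.time() - newest.stat().st_mtime) / 3600


age = latest_backup_age_hours()
if age > RPO_HOURS:
    print(f"WARNING: last backup is {age:.1f}h old, exceeding the {RPO_HOURS}h RPO")
```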
Version control, branching strategy, and CI/CD processes
The tools used for version control (such as Git), along with the applied branching strategy and development workflow, play a vital role in streamlining both development and operations. As part of the audit, it's important to assess:
What version control system is in use?
Are CI/CD (Continuous Integration / Continuous Delivery) pipelines in place?
Well-implemented CI/CD systems enable fast, reliable deployments and support the integration of automated testing, reducing the risk of human error and ensuring higher code quality.
If these practices or tools are missing, their implementation - along with the associated costs and effort - must be considered during the operational transition.
6. Analysis of historical operational data
Existing operational data can provide valuable insight into the current condition and maintainability of a software system.
By analyzing information from bug tracking systems and version control platforms, it is possible to identify patterns and trends that reveal the types and frequency of past issues, areas with high defect concentration, and the pace and nature of recent development activities.
Bug tracker analysis
If the software uses a bug tracking system (such as Jira, Bugzilla, GitHub Issues, etc.) and issues are consistently recorded, it can serve as a rich source of operational insight. Key aspects to examine include:
1. List of open and recurring issues
Are there critical bugs that remain unresolved?
Are there recurring problems that suggest underlying infrastructure weaknesses or code quality issues?
2. Incident and outage history
What are the most common causes of system failures or performance bottlenecks?
How quickly and through what methods were these issues resolved?
3. Response and resolution times (SLA compliance)
How long does it take to respond to and resolve reported issues?
Do these times align with the response and resolution targets defined in the SLA?
4. Issue categorization and trends
What is the distribution of critical vs. low-priority bugs?
Is the number of incidents increasing or decreasing over time?
5. Workarounds and supporting documentation
Are there temporary fixes in use by developers or operations teams?
How sustainable are these workarounds, and is there a need for permanent solutions?
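Most trackers can export issues as JSON or CSV, which makes these questions easy to quantify. The sketch below computes a median resolution time and the share of critical issues from a hypothetical export; the file name and field names are assumptions, not any specific tracker's API.

```python
import json
from datetime import datetime
from statistics import median

with open("issues_export.json") as f:  # hypothetical export file
    issues = json.load(f)


def hours_between(start: str, end: str) -> float:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600


resolved = [i for i in issues if i.get("resolved_at")]
resolution_hours = [hours_between(i["created_at"], i["resolved_at"]) for i in resolved]
critical = [i for i in issues if i.get("priority") == "critical"]

print(f"Open issues: {len(issues) - len(resolved)}")
print(f"Median resolution time: {median(resolution_hours):.1f}h")
print(f"Share of critical issues: {len(critical) / len(issues):.0%}")
```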
Version control analysis
The history stored in the version control system (e.g., Git) can reveal a great deal about the state of the software and the development process. By examining commit patterns and metadata, it is possible to identify problematic components or code areas using the following methods:
1. Code churn (Frequency of Change)
Files or modules that are frequently modified, refactored, or fixed often indicate poor design or lower code quality.
Tools: Git Analytics, CodeScene
2. Bug fix frequency and error correlation
Analyzing how often bug-fix commits occur can highlight the files or areas most prone to defects and instability.
3. Number of developers modifying the same code
If too many developers modify the same file or module, it may suggest that the component is overly complex, poorly documented, or lacks clear ownership.
Tools: Git blame, GitHub Insights
4. Large and complex commits
Oversized commits often point to monolithic or highly complex code blocks, which may benefit from refactoring into smaller, more maintainable units.
Tools: GitLens, GitStats
5. Neglected code and outdated dependencies
Identifying files that haven't been updated in a long time but still serve critical functions can uncover hidden risks in the codebase.
Tools: Git Age Analysis, CodeScene
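A quick way to approximate code churn without dedicated tooling is to count how often each file appears in the commit history, for example with a small script around git log (a rough sketch only; tools such as CodeScene provide far richer hotspot analysis):

```python
import subprocess
from collections import Counter

# List the files touched by each commit over the last year.
log = subprocess.run(
    ["git", "log", "--since=1 year ago", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter(line for line in log.splitlines() if line.strip())

print("Most frequently changed files (potential hotspots):")
for path, count in churn.most_common(10):
    print(f"{count:4d}  {path}")
```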
The insights gained from this data can help estimate the scope of future operational and development tasks, while also highlighting potential weaknesses or risk areas within the system. As part of the audit, it is therefore strongly recommended to thoroughly review both bug tracker and version control data in order to make informed decisions regarding future maintenance and development strategies.
Read the final part of our series:
Part 3: Final steps for a full-scope IT audit
The final chapter in our IT audit series dives into risk areas that can make—or break—your tech: data security, legal gaps, and AI-powered tools.
Missed the beginning?
Go back to Part 1 and uncover how your tech really supports your business:
An IT audit is more than a technical review—it's your roadmap to smarter operations, stronger security, and scalable growth. Whether you're planning a system overhaul or just want to uncover hidden inefficiencies, our audit service gives you actionable insights tailored to your business. At LogiNet, we don’t just point out the gaps—we guide you toward sustainable solutions.
We partner with product owners and founders, developing products from scratch or growing existing ones. These real-world examples highlight how we help companies innovate and succeed.