Application Portfolio Management
Application portfolio management is the discipline of maintaining visibility into all software applications an organisation uses, assessing their value and risk, and making deliberate decisions about their future. The portfolio encompasses every application that supports organisational operations: cloud services, on-premises systems, custom-built tools, spreadsheet-based solutions, and shadow IT discovered through usage analysis. Effective portfolio management transforms a collection of independently acquired applications into a coherent technology estate that advances mission objectives while controlling costs and risks.
- Application Portfolio: The complete set of software applications used by an organisation, including sanctioned enterprise systems, departmental tools, and discovered shadow IT.
- Rationalisation: The process of reducing portfolio complexity by eliminating redundant applications, consolidating overlapping functionality, and retiring systems that no longer deliver value.
- Technical Debt: The accumulated cost of maintaining applications that use outdated technologies, lack documentation, or require workarounds due to deferred modernisation.
- Business Capability: A discrete function the organisation performs to achieve its mission, independent of how that function is currently implemented in technology.
- Application Lifecycle: The progression of an application from initial deployment through active use, maintenance, and eventual retirement.
- Shadow IT: Applications adopted by staff without IT department involvement, often SaaS tools procured on departmental budgets or free tiers.
Portfolio Structure
An application portfolio requires a structured data model that captures enough information to support assessment and decision-making without creating an administrative burden that renders the inventory perpetually incomplete. The core entity is the application record, which represents a distinct software system regardless of how many instances exist or where it runs.
+--------------------------------------------------------+
|                   APPLICATION RECORD                   |
+--------------------------------------------------------+
|                                                        |
|  +--------------------+        +--------------------+  |
|  | Identity           |        | Classification     |  |
|  |                    |        |                    |  |
|  | - Name             |        | - Category         |  |
|  | - Vendor           |        | - Business domain  |  |
|  | - Version          |        | - Criticality      |  |
|  | - Instance count   |        | - Data sensitivity |  |
|  +--------------------+        +--------------------+  |
|                                                        |
|  +--------------------+        +--------------------+  |
|  | Ownership          |        | Technical Profile  |  |
|  |                    |        |                    |  |
|  | - Business owner   |        | - Hosting model    |  |
|  | - Technical owner  |        | - Architecture     |  |
|  | - Support contact  |        | - Integrations     |  |
|  | - Cost centre      |        | - Dependencies     |  |
|  +--------------------+        +--------------------+  |
|                                                        |
|  +--------------------+        +--------------------+  |
|  | Financial          |        | Assessment         |  |
|  |                    |        |                    |  |
|  | - Annual cost      |        | - Business value   |  |
|  | - User count       |        | - Technical health |  |
|  | - Cost per user    |        | - Risk score       |  |
|  | - Contract end     |        | - Disposition      |  |
|  +--------------------+        +--------------------+  |
|                                                        |
+--------------------------------------------------------+
Figure 1: Application record data model showing core attributes for portfolio management
The identity section captures what the application is and who provides it. Version tracking matters because organisations frequently run multiple versions simultaneously during migrations, and each version has different support status and security posture. Instance count distinguishes between a single deployment serving all users and multiple isolated deployments serving different country offices or programmes.
Classification attributes enable filtering and analysis across the portfolio. Category organises applications by function: productivity, finance, programme delivery, data management, and similar groupings that make sense for the organisation’s structure. Business domain links applications to the organisational units they serve, enabling impact analysis when restructuring occurs. Criticality uses a three or four-level scale that directly maps to recovery time objectives and change management requirements. Data sensitivity flags applications handling personal data, financial information, or protection-sensitive content.
Ownership attributes establish accountability. The business owner holds authority over the application’s purpose and user base, making decisions about functionality and access. The technical owner maintains the application, manages updates, and handles incidents. These roles sometimes overlap in small organisations, but the distinction matters for governance. The support contact field captures who to call when something breaks, which differs from the technical owner for vendor-supported SaaS applications. Cost centre enables financial analysis and chargeback if the organisation uses that model.
The technical profile describes how the application runs. Hosting model distinguishes cloud SaaS from cloud IaaS from on-premises deployment. Architecture captures whether the application is monolithic, microservices-based, or a low-code platform build. The integrations list identifies connections to other applications in the portfolio, creating a dependency map that informs retirement planning and change impact analysis. Dependencies capture external systems the application requires: identity providers, databases, middleware, and third-party services.
Financial attributes enable cost analysis and optimisation. Annual cost includes licensing, hosting, support contracts, and estimated internal maintenance effort. User count tracks actual users, not licensed seats, enabling cost-per-user calculations that reveal efficiency variations. Contract end date drives renewal planning and creates natural decision points for rationalisation.
Assessment attributes record the outcomes of periodic portfolio review. Business value and technical health scores derive from the assessment frameworks described below. Risk score aggregates security, compliance, and operational risks. Disposition captures the current strategic decision for the application: invest, maintain, migrate, or retire.
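The application record translates naturally into a simple data structure. The sketch below is illustrative: the field names are a subset of the model above, and the example application is invented. It shows how cost per user derives from annual cost and actual user count:

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationRecord:
    """One entry in the portfolio inventory (illustrative field subset)."""
    name: str
    vendor: str
    category: str
    criticality: str            # e.g. "high", "medium", "low"
    business_owner: str
    technical_owner: str
    hosting_model: str          # "saas", "iaas", "on-premises"
    annual_cost: float          # licensing + hosting + support + maintenance
    user_count: int             # actual users, not licensed seats
    integrations: list[str] = field(default_factory=list)
    disposition: str = "unassessed"   # invest / maintain / migrate / retire

    @property
    def cost_per_user(self) -> float:
        """Annual cost over actual users; zero users returns the full cost."""
        return self.annual_cost / self.user_count if self.user_count else self.annual_cost

# Hypothetical record for illustration
crm = ApplicationRecord(
    name="ExampleCRM", vendor="Example Ltd", category="external relations",
    criticality="medium", business_owner="Head of Fundraising",
    technical_owner="IT Operations", hosting_model="saas",
    annual_cost=12_000, user_count=40,
)
print(round(crm.cost_per_user))  # 300
```

Tracking actual users rather than licensed seats in `user_count` is what makes the cost-per-user figure reveal underused subscriptions.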
Portfolio Discovery
Building an accurate inventory requires multiple discovery approaches because no single method captures all applications. Active discovery through network scanning identifies applications by their traffic patterns and DNS queries. Cloud access security brokers reveal SaaS applications accessed through the corporate network or identity provider. Financial discovery examines expense reports, procurement records, and credit card statements for software subscriptions. Survey-based discovery asks department heads and programme managers what applications their teams use.
+------------------------------------------------------------------+
|                        DISCOVERY SOURCES                         |
+------------------------------------------------------------------+
|                                                                  |
|    Network             Identity Provider         Financial       |
|    Discovery           Discovery                 Discovery       |
|       |                       |                      |           |
|       v                       v                      v           |
|   +--------+             +--------+             +--------+       |
|   | DNS    |             | SSO    |             | Expense|       |
|   | logs   |             | audit  |             | reports|       |
|   +--------+             +--------+             +--------+       |
|   +--------+             +--------+             +---------+      |
|   | CASB   |             | Survey |             | Contract|      |
|   | logs   |             | results|             | database|      |
|   +--------+             +--------+             +---------+      |
|       |                       |                      |           |
|       +-----------+-----------+----------+-----------+           |
|                   |                      |                       |
|                   v                      v                       |
|              +--------+             +--------+                   |
|              | Dedup  |------------>| Enrich |                   |
|              | Match  |             | Verify |                   |
|              +--------+             +--------+                   |
|                   |                      |                       |
|                   +-----------+----------+                       |
|                               v                                  |
|                 +-----------------------------+                  |
|                 |     PORTFOLIO INVENTORY     |                  |
|                 +-----------------------------+                  |
|                                                                  |
+------------------------------------------------------------------+
Figure 2: Portfolio discovery combining multiple sources to build complete inventory
Network discovery captures applications that generate distinctive traffic patterns. DNS logs reveal domain names accessed by users, and filtering for known SaaS provider domains identifies cloud application usage. This approach misses applications accessed only through mobile devices on cellular networks or personal computers outside the corporate network.
Identity provider discovery extracts application usage from single sign-on logs. Every application integrated with the organisation’s identity provider appears in authentication logs with user counts and access frequency. This source provides accurate usage data but misses applications using local authentication or accessed without SSO integration.
Financial discovery identifies applications by their costs. Procurement systems capture enterprise purchases. Expense reports reveal departmental subscriptions paid by individual staff and reimbursed. Credit card statements from corporate cards show recurring software charges. This approach captures paid applications regardless of technical integration but misses free-tier services and open source tools with no direct cost.
Survey-based discovery collects information directly from application users and owners. Surveys work for applications that evade technical discovery methods but suffer from incomplete responses and inconsistent naming. A programme team describing their “beneficiary database” and a country office reporting their “registration system” might reference the same application or entirely different tools.
Deduplication and matching reconcile discoveries from multiple sources into unique application records. The same application appears under different names: “Microsoft 365” in procurement records, “Office 365” in user surveys, and “outlook.office365.com” in DNS logs. Matching rules based on vendor names, domain patterns, and known aliases consolidate these into single records. Verification confirms each discovered application still exists in active use rather than representing abandoned trials or cancelled subscriptions.
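Matching of this kind can be sketched with a small alias table. The aliases and helper names below are illustrative; a production matcher would also use vendor fields, domain patterns, and fuzzy matching rather than simple substring checks:

```python
# Illustrative alias table mapping discovered names and domains onto
# canonical application records. Entries are examples, not a real rule set.
ALIASES = {
    "office 365": "Microsoft 365",
    "o365": "Microsoft 365",
    "outlook.office365.com": "Microsoft 365",
    "kobo": "KoboToolbox",
}

def canonical_name(discovered: str) -> str:
    """Map a discovered name or domain onto a canonical application name."""
    key = discovered.strip().lower()
    if key in ALIASES:                      # exact alias hit first
        return ALIASES[key]
    for alias, canonical in ALIASES.items():
        if alias in key:                    # then substring match
            return canonical
    return discovered.strip()               # unmatched: keep for manual review

def deduplicate(discoveries: list[str]) -> set[str]:
    """Collapse multi-source discoveries into unique canonical records."""
    return {canonical_name(d) for d in discoveries}

found = ["Microsoft 365", "Office 365", "outlook.office365.com", "KoboToolbox"]
print(sorted(deduplicate(found)))  # ['KoboToolbox', 'Microsoft 365']
```

Unmatched names fall through unchanged so that verification can review them instead of silently creating duplicate records.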
Assessment Frameworks
Portfolio assessment evaluates each application across multiple dimensions to inform strategic decisions. The TIME framework categorises applications by their strategic trajectory: Tolerate, Invest, Migrate, or Eliminate. The business value and technical health matrix positions applications for prioritisation. Risk-based assessment identifies applications requiring immediate attention regardless of strategic classification.
The TIME framework assigns each application to one of four categories based on its current strategic direction:
Tolerate applications function adequately but receive no strategic investment. They meet current needs without excelling, and the organisation accepts their limitations rather than investing in improvement or replacement. Tolerate status applies to stable applications with moderate technical health that serve non-critical functions. A legacy reporting tool that produces monthly statistics might fall into Tolerate: it works, users know it, and replacing it offers insufficient benefit to justify the disruption.
Invest applications receive funding and attention to expand their capabilities or reach. Investment targets applications with strong strategic alignment, good technical foundations, and unrealised potential. An organisation expanding its M&E platform to additional programmes or integrating its CRM with new communication channels places those applications in Invest status.
Migrate applications serve important functions but their current implementation cannot continue. Migration applies when the underlying platform reaches end-of-life, when vendor support ceases, when the application cannot meet new compliance requirements, or when consolidation eliminates redundant systems. Migrate status triggers planning for replacement while the current application continues operating.
Eliminate applications provide insufficient value to justify their continued existence. Elimination targets applications with low usage, high costs relative to value, redundant functionality covered by other systems, or unacceptable risk levels. Eliminate status initiates retirement planning with defined timelines for data migration, user transition, and system decommissioning.
+------------------------------------------------------------------+
|                      TIME ASSESSMENT MATRIX                      |
+------------------------------------------------------------------+
|                                                                  |
|  High +------------------------+------------------------+        |
|       |                        |                        |        |
|   B   |  INVEST                |  MIGRATE               |        |
|   u   |                        |  (modernise)           |        |
|   s   |  - Strategic fit       |                        |        |
|   i   |  - Growth potential    |  - Important function  |        |
|   n   |  - Good foundation     |  - Poor technology     |        |
|   e   |                        |  - Platform EOL        |        |
|   s   +------------------------+------------------------+        |
|   s   |                        |                        |        |
|       |  TOLERATE              |  ELIMINATE             |        |
|   V   |                        |                        |        |
|   a   |  - Adequate function   |  - Low usage           |        |
|   l   |  - Low strategic value |  - High cost/risk      |        |
|   u   |  - Acceptable health   |  - Redundant           |        |
|   e   |                        |                        |        |
|  Low  +------------------------+------------------------+        |
|                  Good                      Poor                  |
|                                                                  |
|                        Technical Health                          |
|                                                                  |
+------------------------------------------------------------------+
Figure 3: TIME framework positioning applications by business value and technical health
Business value assessment scores how effectively an application supports organisational mission and operations. Value derives from multiple factors: the number of users who depend on the application, the criticality of the business functions it enables, the availability of alternatives, and the strategic importance of the capabilities it provides. A case management system serving 200 protection staff across 15 country offices scores higher than an expense reporting tool used by 50 headquarters staff, even if both have similar user satisfaction ratings.
Technical health assessment evaluates the application’s operational condition and sustainability. Health factors include the age of the underlying technology stack, the availability of vendor support, the quality of documentation, the frequency of security updates, the stability of the hosting platform, and the availability of skills to maintain the system. An application built on a framework that reached end-of-life three years ago with a vendor that went bankrupt scores poorly regardless of how well it currently functions.
Scoring these dimensions requires structured evaluation. A five-point scale provides sufficient granularity without false precision. Business value scoring considers user reach (how many people depend on it), process criticality (what happens if it fails), strategic alignment (does it support priority initiatives), and replaceability (how difficult would replacement be). Technical health scoring considers technology currency (is the stack supported), operational stability (incident frequency and severity), security posture (vulnerability status and patch currency), and maintainability (documentation, skills availability, vendor support).
Worked Assessment Example
Consider a mid-sized humanitarian organisation assessing three applications in its programme systems portfolio:
Application A: Legacy Registration Database
Built in 2016 on Microsoft Access, this database stores beneficiary registration data for the organisation’s largest programme. It holds 340,000 records and 12 staff access it daily. The Access runtime version dates to 2016, the database exceeds 1.8 GB (approaching Access limits), and no documentation exists beyond tribal knowledge held by one departing staff member.
Business value score: 4/5 (high user dependency, critical data, difficult to replace quickly)
Technical health score: 1/5 (obsolete platform, no documentation, single point of failure, approaching technical limits)
TIME disposition: Migrate (urgent priority)
Application B: Cloud-based M&E Platform
Deployed in 2022, this SaaS platform serves 45 users across three country programmes. It integrates with the organisation’s data collection tools and produces donor reports. The vendor maintains active development with quarterly releases and responsive support.
Business value score: 4/5 (essential M&E function, good user adoption, strong donor reporting)
Technical health score: 4/5 (current platform, vendor supported, good integration, adequate documentation)
TIME disposition: Invest (expand to additional programmes)
Application C: Standalone Mapping Tool
A desktop GIS application installed on four computers in the logistics team. Annual license cost of £2,400. Used intermittently for distribution planning. The organisation’s M&E platform now includes basic mapping functionality.
Business value score: 2/5 (limited users, infrequent use, functionality duplicated elsewhere)
Technical health score: 3/5 (maintained software, but isolated from other systems)
TIME disposition: Eliminate (consolidate mapping into M&E platform)
This assessment reveals the organisation should prioritise migration of the registration database before the departing staff member leaves, plan M&E platform expansion, and initiate retirement of the standalone mapping tool to recover £2,400 annually while reducing portfolio complexity.
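The positioning logic behind these dispositions can be sketched as a threshold rule over the two scores. The thresholds below are illustrative and should be calibrated to the organisation’s own rubric; note too that a pure two-axis rule cannot capture overrides such as redundancy, so results remain a starting point for judgement rather than a final answer:

```python
def time_disposition(business_value: int, technical_health: int) -> str:
    """Position an application in the TIME matrix from two 1-5 scores.
    Threshold choices are illustrative, not part of the framework itself."""
    high_value = business_value >= 3     # upper half of the matrix
    good_health = technical_health >= 4  # left half of the matrix
    if high_value and good_health:
        return "Invest"
    if high_value:
        return "Migrate"
    if good_health:
        return "Tolerate"
    return "Eliminate"

# The three applications assessed above:
print(time_disposition(4, 1))  # Migrate   (A: legacy registration database)
print(time_disposition(4, 4))  # Invest    (B: cloud M&E platform)
print(time_disposition(2, 3))  # Eliminate (C: standalone mapping tool)
```

With these particular thresholds the rule reproduces the three worked dispositions, but application C’s Eliminate call also rests on redundancy with the M&E platform, a factor the scores alone do not encode.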
Business Capability Mapping
Business capability mapping connects applications to the organisational functions they support, revealing gaps, redundancies, and dependencies that application-centric views obscure. A capability map starts with the organisation’s mission and decomposes it into discrete functions that must exist regardless of how they are implemented.
+-------------------------------------------------------------------+
|                     CAPABILITY MAP STRUCTURE                      |
+-------------------------------------------------------------------+
|                                                                   |
|  MISSION: Deliver humanitarian assistance to crisis-affected      |
|           populations                                             |
|                                                                   |
|  +-------------------------------+-------------------------------+
|  |      PROGRAMME DELIVERY       |    ORGANISATIONAL SUPPORT     |
|  +-------------------------------+-------------------------------+
|  |                               |                               |
|  | +---------------------------+ | +---------------------------+ |
|  | | Needs Assessment          | | | Financial Management      | |
|  | | - Data collection         | | | - Accounting              | |
|  | | - Analysis                | | | - Budgeting               | |
|  | | - Prioritisation          | | | - Reporting               | |
|  | +---------------------------+ | +---------------------------+ |
|  |                               |                               |
|  | +---------------------------+ | +---------------------------+ |
|  | | Beneficiary Management    | | | Human Resources           | |
|  | | - Registration            | | | - Recruitment             | |
|  | | - Identity verification   | | | - Payroll                 | |
|  | | - Deduplication           | | | - Performance             | |
|  | +---------------------------+ | +---------------------------+ |
|  |                               |                               |
|  | +---------------------------+ | +---------------------------+ |
|  | | Service Delivery          | | | Procurement & Logistics   | |
|  | | - Cash assistance         | | | - Sourcing                | |
|  | | - In-kind distribution    | | | - Warehousing             | |
|  | | - Case management         | | | - Transport               | |
|  | +---------------------------+ | +---------------------------+ |
|  |                               |                               |
|  | +---------------------------+ | +---------------------------+ |
|  | | Monitoring & Evaluation   | | | External Relations        | |
|  | | - Indicator tracking      | | | - Donor management        | |
|  | | - Outcome measurement     | | | - Communications          | |
|  | | - Reporting               | | | - Fundraising             | |
|  | +---------------------------+ | +---------------------------+ |
|  +-------------------------------+-------------------------------+
+-------------------------------------------------------------------+
Figure 4: Capability map decomposing organisational mission into discrete functions
Mapping applications to capabilities reveals coverage and gaps. Each capability should have at least one application supporting it, and preferably only one to avoid redundancy. When multiple applications support the same capability, either they serve different contexts (country offices, programme types) or redundancy exists that rationalisation should address.
The mapping also reveals capability gaps where manual processes or workarounds compensate for missing system support. If the “Identity verification” capability has no supporting application, staff perform manual verification using paper documents or disconnected processes. Identifying these gaps guides investment priorities.
+-------------------------------------------------------------------+
|                  CAPABILITY-APPLICATION MAPPING                   |
+-------------------------------------------------------------------+
|                                                                   |
|  Capability              | Applications           | Status        |
|  ------------------------+------------------------+-------------- |
|  Data collection         | KoboToolbox            | Covered       |
|  Analysis                | Excel, SPSS            | Fragmented    |
|  Prioritisation          | (manual process)       | Gap           |
|  ------------------------+------------------------+-------------- |
|  Registration            | Access DB, CommCare    | Redundant     |
|  Identity verification   | (manual process)       | Gap           |
|  Deduplication           | (manual process)       | Gap           |
|  ------------------------+------------------------+-------------- |
|  Cash assistance         | RedRose                | Covered       |
|  In-kind distribution    | (spreadsheets)         | Gap           |
|  Case management         | Primero                | Covered       |
|  ------------------------+------------------------+-------------- |
|  Indicator tracking      | DHIS2                  | Covered       |
|  Outcome measurement     | DHIS2, Excel           | Fragmented    |
|  Reporting               | DHIS2, Power BI        | Covered       |
|                                                                   |
+-------------------------------------------------------------------+
Figure 5: Capability-application mapping revealing gaps, redundancies, and fragmentation
This mapping exposes several issues requiring attention. The “Registration” capability has two applications (Access DB and CommCare) creating redundancy that complicates data management and increases maintenance burden. Multiple capabilities rely on manual processes, creating operational risk and limiting scalability. The “Analysis” and “Outcome measurement” capabilities show fragmentation across multiple tools without clear integration, suggesting data silos and inconsistent methodologies.
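A first-pass status column for such a mapping can be derived mechanically from the application list per capability. The capability map below is an illustrative fragment; distinguishing redundancy from fragmentation still requires human judgement, so multi-application capabilities are simply flagged for review:

```python
# Illustrative fragment of a capability-application map. An empty list
# means the capability relies on a manual process.
capability_map = {
    "Data collection": ["KoboToolbox"],
    "Prioritisation": [],
    "Registration": ["Access DB", "CommCare"],
    "Outcome measurement": ["DHIS2", "Excel"],
}

def coverage_status(apps: list[str]) -> str:
    """One supporting application is the target state; none is a gap;
    several need review, since telling redundancy (overlapping tools)
    apart from fragmentation (split workflow) is a human judgement."""
    if not apps:
        return "Gap"
    if len(apps) == 1:
        return "Covered"
    return "Review (redundant or fragmented)"

for capability, apps in capability_map.items():
    print(f"{capability}: {coverage_status(apps)}")
```

Run periodically against the live inventory, a check like this keeps the gap and redundancy lists current between full mapping exercises.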
Lifecycle Planning
Applications progress through distinct lifecycle stages that determine appropriate management activities and investment levels. Understanding an application’s lifecycle position enables appropriate resource allocation and prevents both premature retirement of valuable systems and excessive investment in declining applications.
+------------------------------------------------------------------------------------+
|                               APPLICATION LIFECYCLE                                |
+------------------------------------------------------------------------------------+
|                                                                                    |
|    DEPLOY          GROW            MATURE          DECLINE         RETIRE          |
|      |               |               |                |               |            |
|      v               v               v                v               v            |
|  +---------+     +---------+     +---------+     +---------+     +---------+       |
|  |         |     |         |     |         |     |         |     |         |       |
|  | Pilot   |     | Scale   |     | Stable  |     | Legacy  |     | End     |       |
|  | Test    |---->| Expand  |---->| Operate |---->| Manage  |---->| of      |       |
|  | Learn   |     | Adopt   |     | Optimise|     | Risk    |     | Life    |       |
|  |         |     |         |     |         |     |         |     |         |       |
|  +---------+     +---------+     +---------+     +---------+     +---------+       |
|                                                                                    |
|  Investment:  High          High          Low           Low           One-time     |
|  Focus:       Function      Adoption      Efficiency    Risk          Exit         |
|  Change rate: High          Medium        Low           Minimal       Final        |
|                                                                                    |
+------------------------------------------------------------------------------------+
Figure 6: Application lifecycle stages with characteristic investment and management patterns
The deployment stage encompasses initial implementation, pilot testing, and early learning. Investment runs high as the organisation configures the application, integrates it with existing systems, migrates data from predecessors, and trains users. Change frequency peaks during this stage as configuration adjusts to actual requirements discovered through use. Success metrics focus on functional completeness and user adoption rather than efficiency.
The growth stage expands the application to additional users, locations, or use cases. Investment remains high but shifts from implementation to scaling: additional licenses, infrastructure capacity, training programmes for new user groups, and integrations that extend the application’s reach. Change continues at moderate pace as new requirements emerge from expanded use.
The mature stage represents stable operation serving established user populations. Investment drops to maintenance levels: license renewals, security updates, minor enhancements, and operational support. The focus shifts to operational efficiency, cost optimisation, and service quality. Changes occur infrequently and follow rigorous change management to protect stability.
The decline stage signals that the application’s useful life approaches its end. The underlying technology ages beyond vendor support, user needs evolve beyond the application’s capabilities, or strategic decisions select a replacement. Investment during decline focuses on risk management: ensuring security patches apply until retirement, maintaining just enough documentation for transition, and planning the migration path. Changes are avoided except for critical security fixes.
The retirement stage executes the transition to a replacement or elimination of the capability. Investment occurs as a one-time event: data migration, user transition, decommissioning activities, and vendor contract termination. The Application Retirement task details this process.
Lifecycle stage informs appropriate governance responses. Applications in the deployment stage receive intensive project management attention. Mature applications require periodic review but not constant oversight. Declining applications need proactive retirement planning before crisis forces hurried transitions. Portfolio management tracks lifecycle stages across all applications and ensures appropriate management activities occur at each stage.
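The stage characteristics from Figure 6 can be held as a small lookup table, so that reviews can compare an application’s actual treatment against the expected pattern for its stage. The structure below is a sketch, not a prescribed schema:

```python
# Stage-to-management mapping from Figure 6 as a lookup table.
LIFECYCLE = {
    "deploy":  {"investment": "high",     "focus": "function",   "change": "high"},
    "grow":    {"investment": "high",     "focus": "adoption",   "change": "medium"},
    "mature":  {"investment": "low",      "focus": "efficiency", "change": "low"},
    "decline": {"investment": "low",      "focus": "risk",       "change": "minimal"},
    "retire":  {"investment": "one-time", "focus": "exit",       "change": "final"},
}

def expected_profile(stage: str) -> dict:
    """Return the expected investment, focus, and change rate for a stage."""
    return LIFECYCLE[stage.lower()]

print(expected_profile("Decline")["focus"])  # risk
```

A declining application still receiving high investment, or a mature one with a high change rate, then stands out as a candidate for review.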
Rationalisation Strategies
Rationalisation reduces portfolio complexity through deliberate elimination of redundant, underperforming, or unnecessary applications. Complexity carries costs beyond direct licensing: each application requires security monitoring, access management, integration maintenance, backup configuration, and knowledge maintenance. Reducing portfolio size from 200 applications to 150 applications saves far more than the direct costs of the 50 retired applications.
Redundancy elimination consolidates multiple applications serving the same capability into a single strategic application. The capability mapping exercise identifies redundancy by revealing multiple applications supporting identical functions. Selection criteria for the surviving application include technical health, user adoption, strategic fit, total cost of ownership, and vendor viability. Migration planning moves users and data from eliminated applications to the consolidated platform.
Standardisation replaces varied approaches with consistent platforms across the organisation. Where country offices independently selected their own collaboration tools, standardisation adopts a single platform organisation-wide. Standardisation reduces training costs, simplifies support, enables consistent security configuration, and improves interoperability. The transition period supports multiple platforms while migration completes, then eliminates non-standard applications.
Feature consolidation expands the scope of existing applications to absorb functionality from separate tools. Modern platforms frequently include capabilities that organisations previously obtained through dedicated applications. A programme management system that adds basic M&E features might absorb a standalone indicator tracking tool. Consolidation requires evaluating whether the consolidated capability meets requirements adequately or whether specialised functionality justifies maintaining a separate application.
Shadow IT remediation brings unsanctioned applications under governance or replaces them with sanctioned alternatives. Discovery processes identify shadow IT, and remediation determines whether each discovered application fills a legitimate need unmet by the official portfolio. Legitimate needs prompt evaluation of whether to sanction the shadow application, replace it with a sanctioned alternative, or extend an existing application to cover the requirement. Applications that duplicate sanctioned functionality without justification proceed to elimination.
Governance Model
Portfolio governance establishes decision rights, review processes, and accountability for portfolio management activities. Without explicit governance, portfolio decisions occur ad hoc through procurement processes, project implementations, and departmental initiatives without coherent strategy.
+------------------------------------------------------------------+
|                       GOVERNANCE STRUCTURE                       |
+------------------------------------------------------------------+
|                                                                  |
|  +------------------------------------------------------------+  |
|  |                 TECHNOLOGY STEERING GROUP                  |  |
|  |                                                            |  |
|  |  - Portfolio strategy approval                             |  |
|  |  - Major investment decisions (>£50,000)                   |  |
|  |  - Rationalisation programme oversight                     |  |
|  |  - Annual portfolio review                                 |  |
|  +------------------------------------------------------------+  |
|                               |                                  |
|                               v                                  |
|  +------------------------------------------------------------+  |
|  |                    IT LEADERSHIP TEAM                      |  |
|  |                                                            |  |
|  |  - Portfolio management execution                          |  |
|  |  - Application decisions (£10,000-£50,000)                 |  |
|  |  - Assessment framework application                        |  |
|  |  - Quarterly portfolio reviews                             |  |
|  +------------------------------------------------------------+  |
|                               |                                  |
|          +--------------------+--------------------+             |
|          |                    |                    |             |
|          v                    v                    v             |
|  +--------------+     +--------------+     +--------------+      |
|  | Application  |     | Application  |     | Application  |      |
|  | Owner A      |     | Owner B      |     | Owner C      |      |
|  |              |     |              |     |              |      |
|  | - Day-to-day |     | - Day-to-day |     | - Day-to-day |      |
|  |   management |     |   management |     |   management |      |
|  | - User       |     | - User       |     | - User       |      |
|  |   support    |     |   support    |     |   support    |      |
|  | - Change     |     | - Change     |     | - Change     |      |
|  |   requests   |     |   requests   |     |   requests   |      |
|  +--------------+     +--------------+     +--------------+      |
|                                                                  |
+------------------------------------------------------------------+
Figure 7: Portfolio governance structure with decision rights at each level
The technology steering group, or equivalent senior body, holds authority over portfolio strategy and major decisions. This group approves the overall portfolio direction, authorises investments exceeding defined thresholds (£50,000 represents a reasonable threshold for mid-sized organisations), oversees rationalisation programmes affecting multiple business units, and conducts annual strategic portfolio reviews. Membership includes senior leadership from IT and major business functions to ensure portfolio decisions align with organisational priorities.
IT leadership executes portfolio management within approved strategy. This level makes application-level decisions within defined thresholds, applies assessment frameworks to individual applications, conducts quarterly operational reviews, and manages the portfolio inventory and documentation. Decisions at this level include selecting applications to fill identified gaps, approving lifecycle stage transitions, and initiating retirement planning for applications marked for elimination.
Application owners manage individual applications day-to-day. Each application in the portfolio has an identified business owner accountable for its use and value, and a technical owner responsible for its operation and maintenance. Owners handle user support, manage change requests, report issues to IT leadership, and provide input to portfolio assessments. The Technology Decision Rights reference details decision authority allocation.
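The financial thresholds in this structure can be expressed as a simple routing rule. The sketch below follows the example thresholds above; routing sub-£10,000 decisions to the application owner is an assumption, and the bands should match whatever delegation scheme the organisation actually adopts:

```python
def decision_level(annual_cost: float) -> str:
    """Route an application decision to a governance tier by cost band.
    The £10,000 and £50,000 thresholds follow the example structure;
    the sub-£10,000 band is an assumed delegation to the owner."""
    if annual_cost > 50_000:
        return "Technology Steering Group"
    if annual_cost >= 10_000:
        return "IT Leadership Team"
    return "Application Owner"

print(decision_level(75_000))  # Technology Steering Group
print(decision_level(12_000))  # IT Leadership Team
print(decision_level(2_400))   # Application Owner
```

Encoding the bands once, rather than restating them in each procurement document, keeps the thresholds consistent when they are revised.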
Review cadence ensures portfolio management occurs continuously rather than as sporadic initiatives. Annual strategic review examines the entire portfolio against organisational strategy, updates capability maps, and sets rationalisation priorities. Quarterly operational review examines application health metrics, tracks lifecycle transitions, and monitors rationalisation progress. Triggered reviews occur when significant events affect portfolio applications: vendor acquisitions, security incidents, major version releases, or organisational restructuring.
Implementation Considerations
Portfolio management implementation varies by organisational context. Organisations with minimal IT capacity require different approaches than those with established IT functions.
For Organisations with Limited IT Capacity
Start with a minimal portfolio inventory capturing the applications that matter most. Focus on applications that handle sensitive data, cost more than £1,000 annually, or support critical business functions. A spreadsheet with 30 essential fields for 50 high-priority applications provides more value than an elaborate tool with empty records.
Use financial records as the primary discovery source. Pull software expenses from the accounting system and work backwards to identify applications. This approach captures paid applications efficiently and establishes cost visibility immediately.
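Working backwards from expenses can be sketched as a short script. This is a minimal illustration assuming a simple export of expense lines from the accounting system; the vendor names, costs, and inventory contents below are invented for the example.

```python
# Sketch: derive application candidates from an accounting export by
# aggregating spend per vendor and flagging vendors absent from the inventory.
# All data below is illustrative, not a real export.
from collections import defaultdict

# Expense lines as exported: (vendor, description, annual cost in GBP)
expense_lines = [
    ("Salesforce.com", "CRM subscription", 18_000),
    ("Mailchimp", "Newsletter platform", 1_200),
    ("Mailchimp", "Add-on seats", 300),
    ("Canva", "Design tool, comms team", 450),
]

# Vendors whose applications are already recorded in the portfolio inventory
inventory_vendors = {"Salesforce.com"}

# Aggregate spend per vendor
spend_by_vendor = defaultdict(int)
for vendor, _description, cost in expense_lines:
    spend_by_vendor[vendor] += cost

# Vendors with spend but no inventory record are discovery candidates
candidates = {
    vendor: cost
    for vendor, cost in spend_by_vendor.items()
    if vendor not in inventory_vendors
}

# Highest spend first, since cost visibility drives prioritisation
for vendor, cost in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{vendor}: £{cost:,} per year — not in inventory, investigate")
```

Each flagged vendor then needs a manual follow-up to identify the actual application, its users, and its business owner before a record is created.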
Conduct lightweight assessment using simplified criteria. A three-point scale (keep, review, eliminate) applied to basic questions (is this still used, does it overlap with other tools, is it affordable) produces actionable results without elaborate frameworks. Annual review suffices for stable portfolios.
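The three-question triage can be expressed as a few lines of decision logic. The rules below are one plausible reading of the simplified criteria, not a prescribed standard.

```python
# Sketch: three-point triage (keep / review / eliminate) from the three
# basic questions. The decision rules are an illustrative assumption.
def triage(in_use: bool, overlaps: bool, affordable: bool) -> str:
    if not in_use:
        return "eliminate"   # unused applications are candidates for retirement
    if overlaps or not affordable:
        return "review"      # used, but redundant with other tools or too costly
    return "keep"            # used, distinct, and affordable


# Example: a tool that is still used but duplicates another application
print(triage(in_use=True, overlaps=True, affordable=True))
```

Applied annually across a small inventory, this produces a worklist of "review" applications without requiring a weighted scoring framework.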
Combine portfolio management with other IT activities. The person managing vendor relationships naturally tracks application inventory. Security reviews naturally assess application health. Procurement processes naturally trigger portfolio updates.
For Organisations with Established IT Functions
Implement dedicated portfolio management tools that integrate with discovery sources and support workflow automation. ServiceNow, Freshservice, or open source alternatives like iTop provide structured application records, relationship mapping, and reporting capabilities. Integration with identity providers and cloud access security brokers automates ongoing discovery.
Establish formal assessment processes with documented criteria and scoring rubrics. Train assessors to apply criteria consistently. Conduct calibration sessions where multiple assessors score the same applications and reconcile differences. Document assessment rationale for future reference.
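A calibration session's output can be checked with a simple agreement calculation. The sketch below assumes two assessors scoring the same applications on the keep/review/eliminate scale; the application names and scores are invented for the example.

```python
# Sketch: flag scoring disagreements between two assessors so a calibration
# session can reconcile them. Data below is illustrative.
scores_a = {"CRM": "keep", "Newsletter": "review", "Design tool": "eliminate"}
scores_b = {"CRM": "keep", "Newsletter": "keep",   "Design tool": "eliminate"}

# Applications where the two assessors gave different scores
disagreements = {
    app: (scores_a[app], scores_b[app])
    for app in scores_a
    if scores_a[app] != scores_b[app]
}

# Simple percent agreement as a calibration metric
agreement = 1 - len(disagreements) / len(scores_a)
print(f"Agreement: {agreement:.0%}")
for app, (a, b) in disagreements.items():
    print(f"Reconcile {app}: assessor A said '{a}', assessor B said '{b}'")
```

Low agreement suggests the criteria wording is ambiguous and needs tightening before the assessment is rolled out more widely.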
Connect portfolio management to enterprise architecture. Capability mapping becomes a living artefact that guides not just portfolio decisions but also solution design and strategic planning. Technology radar exercises identify emerging technologies and flag declining platforms for sunset.
Integrate portfolio decisions with financial planning. Budget processes reference portfolio assessments when evaluating application investments. Cost allocation ties application costs to consuming departments. Business cases for new applications reference portfolio strategy and demonstrate how new acquisitions align with rationalisation objectives.
For Federated Organisations
Federated organisations with autonomous country offices or semi-independent business units face unique portfolio challenges. Each unit may have legitimate reasons for different application choices based on local requirements, connectivity constraints, or regulatory environments.
Establish portfolio governance that distinguishes between applications requiring global standardisation and those permitting local variation. Identity and security applications warrant standardisation for consistent protection. Programme delivery applications might allow variation where local requirements differ materially. Productivity applications benefit from standardisation but may require accommodation for connectivity constraints.
Create shared portfolio visibility across federated units even where application decisions remain local. A consolidated view reveals redundancy opportunities, successful implementations worth replicating, and struggling applications requiring support. Shared visibility enables coordination without requiring centralised control.
Technology Options
Portfolio management tools range from spreadsheets to enterprise platforms. Tool selection should match organisational complexity and available resources.
Spreadsheets work for organisations with fewer than 100 applications and stable portfolios. A well-structured Excel workbook or Google Sheet with defined columns, data validation, and pivot tables supports basic inventory, assessment, and reporting. Limitations emerge as portfolio size grows: maintaining relationships between records becomes cumbersome, multi-user editing creates conflicts, and audit trails require manual discipline.
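The flat record structure a spreadsheet inventory implies can be sketched as a simple schema. The field names below are illustrative assumptions about what a minimal record might hold, not a prescribed layout.

```python
# Sketch: a minimal application record matching a flat spreadsheet layout,
# written out in the same shape as a spreadsheet export. Field names and
# the sample record are illustrative assumptions.
from dataclasses import asdict, dataclass
import csv
import io


@dataclass
class ApplicationRecord:
    name: str
    business_owner: str
    technical_owner: str
    annual_cost_gbp: int
    handles_sensitive_data: bool
    lifecycle_stage: str  # e.g. "active", "maintain", "retire"


records = [
    ApplicationRecord(
        name="CRM",
        business_owner="Head of Fundraising",
        technical_owner="IT Manager",
        annual_cost_gbp=18_000,
        handles_sensitive_data=True,
        lifecycle_stage="active",
    ),
]

# Serialise to CSV, one row per application, header row matching the fields
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=list(asdict(records[0])))
writer.writeheader()
writer.writerows(asdict(r) for r in records)
print(buffer.getvalue())
```

Keeping the schema this small is the point: every field must earn its place by supporting an actual assessment question, or it will simply go unmaintained.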
Open source IT service management platforms include application portfolio modules. iTop (GPLv3 licence) provides configuration management database functionality that supports application inventory with relationship mapping. GLPI extends beyond help desk into asset and configuration management. These platforms require installation and maintenance but avoid licensing costs.
Commercial portfolio management tools offer specialised functionality. ServiceNow Application Portfolio Management integrates with the broader ServiceNow platform. Freshservice provides application mapping within its ITSM suite, available through nonprofit programmes at reduced cost. LeanIX and Ardoq specialise in enterprise architecture and portfolio management for larger organisations.
For organisations already using cloud productivity suites, purpose-built solutions within those ecosystems reduce integration burden. Microsoft Lists or SharePoint lists with custom forms can structure portfolio data within existing Microsoft 365 environments. Airtable provides flexible database functionality with relationship support and an adequate free tier for smaller portfolios.
Tool selection criteria include: integration with existing IT service management, discovery source connectivity, reporting and visualisation capabilities, multi-user collaboration features, relationship and dependency mapping, and total cost of ownership including implementation and maintenance effort.