Dinkar Gupta


Dinkar Gupta is an Associate Director in Cognizant's Banking and Financial Services (BFS) Technology and Architecture Organization. As a Principal Architect, he heads the strategy, architecture and technology teams assisting global BFS clients. He has a post-graduate degree in computer science and applications and around 15 years of software development experience across multiple industry segments. He is also an IBM-certified solution designer for Rational Software Architect and is actively engaged in enterprise IT architecture, system transformation consulting and solution delivery roles for BFS customers. Dinkar can be reached at Dinkar.Gupta@cognizant.com.




Service Point Estimation Model for SOA Based Projects
Published: November 27, 2013 • Service Technology Magazine Issue LXXVIII


Estimation in Service-Oriented Architecture (SOA) based projects has been a challenge for IT teams across the globe. As per [REF-1], "Estimating the cost, size and efforts for SOA application is a difficult task due to its diverse nature and loose coupling behavior, which results in an inaccurate estimate to measure the efforts, size and functionality of SOA applications". The Service Point Estimation Technique presented in this paper is a newly developed estimation technique for calculating the size of an SOA project and deriving the effort and cost of the corresponding service implementation. Built on foundations similar to the UCP and FP estimation techniques, it is a technology-agnostic, generic model that can be applied to SOA projects across business solutions and product development initiatives. The basic reason for its design was simple: help IT teams answer the question "How big is this SOA project and how much will it cost to implement?"

Given that the notion of a service includes many more aspects than just the functional logic it serves, the estimation model considers all three aspects of service design: functional complexity, Quality of Service (QoS) expectations and the service development environment. It calculates the size of a service at the granularity of both service operations and service interface contracts.

In the past we have clearly felt the absence of a scientific method for estimating SOA projects in terms of the core unit of these projects: the services. This model is an attempt to fill that gap. The entire portfolio of service development, large-scale service migration and technology transformation programs is expected to benefit from the model, because SOA is the norm in the majority of systems development projects across enterprises.

Background and Context

As is well known, SOA is founded on the notion of services, and each service is characterized by three distinct layers: 1) policies and service level agreements (SLAs), 2) service interface and 3) service implementation, with the first two forming what is generally referred to as the Service Contract.


Figure 1 - Logical Service Model

Every service supports the business/functional capability assigned to it through a set of well-defined operations that are offered on the interface of the service under the specified SLAs (often called QoS Parameters). These operations consist of a set of messages that are exchanged between the consumers and providers during service invocation.

How the provider implements the service operation is at the discretion of the providing system, as long as the operation conforms to the service contract published for the consumers. Typically, such implementations make use of other fine-grained services, internal application data stores, external system interfaces, an internal business object/domain model or algorithmic computation to fulfill consumer requests received through the service operation. All of these aspects contribute to the functional complexity of the service operation.

Another often ignored but equally important element of any project is its organizational context. This context includes aspects such as the team working on the project, the available business and technology know-how and proper tool support, among others. These factors, generally referred to as environmental factors in estimation parlance, must also be closely looked at during SOA project estimation.

Given that multiple factors may impact the complexity of service design and development, it's critical that the estimation of service development effort take a holistic view of the service, instead of considering just the implementation, only the contract, or the technology that's going to be used.

Existing Models and Studies

As active mainstream development in the SOA space started quite late and large-scale enterprise adoption of SOA is still in progress, there has been a lack of formal techniques for estimating SOA projects. Estimation techniques such as those based on use case points (UCP), function points (FP), complexity points (CP), COCOMO and lines of code (LOC) have been around for some time, but most of these were not developed with contemporary SOA-style systems in mind.

During the research on this topic, it was found that only the COSMIC FP estimation model offers some concrete guidance in this space (see [REF-2]), but even that is limited to the data-exchange-related aspects of services. The QoS and environmental aspects (e.g. governance controls) are not considered in that model. Another attempt is from NICTA (see [REF-3]), but it too is quite abstract (it refers to COSMIC FP for the most part), as it is more of a guide on how to approach SOA estimation. Recently, Yuri Marx Pereira Gomes published an article on using function points for SOA estimation (see [REF-4]). The article does an excellent job of explaining how to map elements of SOA projects to FP elements and offers a pragmatic model for estimating SOA projects. However, this approach also falls short on other aspects, such as the QoS expectations and environmental aspects of the project.

Despite their limitations, these approaches/models provided an excellent starting point for developing a relevant estimation model for SOA projects, and so these models and the existing work of the UCP community have been leveraged to come up with the model presented below.

Service Point Estimation

The approach described in this paper is called Service Point Estimation. To align with well-known concepts and terminology, the notion of a service point (SP) is borrowed from the existing work done in the estimation space.

As defined in [REF-5], a service is a unit of solution logic to which service-orientation has been applied to a meaningful extent. It is a container for a collection of related functions. These functions are called service capabilities, and those exposed via a service contract establish a basic API by which the service can be invoked. In modern-day, web-service-oriented SOA, service capabilities are represented in the form of service operations. Figure 2 provides an example from the financial services domain. In this example, capabilities such as creating an order, checking order status and canceling an order are provided as operations on the service OrderExecution.


Figure 2 - Service & its Operations

The size of the service operations and the service is captured in SP units, and the efforts are derived from that size. The rest of the paper details how this concept is applied to a service model under estimation.

Estimation Granularity

The model is based on the premise that the individual service operation should be the unit of measurement, and that aggregating these operations at the service interface level provides the overall sizing of the service. Once the sizing is ascertained at such a granular level, it's easy to aggregate it further, up to the entire service portfolio being delivered through a project. Efforts, though, are calculated at the level of individual service interfaces, because it's generally impractical for a project to deliver only a single service operation (although that is just the base case: a service having only one operation).

Model Overview

As mentioned above, the model is based on the notion of complexity in the form of functional and quality expectations, because the core contributors to an operation's complexity are its functional requirements and system quality objectives. Together these contribute (the calculation is explained later) to the complexity of the service operation. Once the complexity is ascertained for all the concerned operations of the service, the environmental factors are applied (also explained later), resulting in the overall complexity involved in developing the service being estimated.


Figure 3 - Service Point as Complexity Measure – Key Components

Functional Complexity

The complexity of a service operation can be stated in the form of certain key measurable parameters that are generally captured. These parameters can be categorized into three primary buckets: invocation data, business logic and downstream integration. Here's a brief description of the parameters considered for complexity calculation in each category.

Invocation Data

  • Request message size: Number of unique information sets supplied as input during service invocation
  • Response message size: Number of unique information sets returned as output (in the case of two-way MEPs) during service invocation
  • Data translation complexity: The complexity of the format and semantics translation that has to be applied before the request is processed or the response is returned

Business Logic

  • Core business logic complexity: Generally represents the approximate cyclomatic complexity of the operation as represented in the functional/use case specification of the service
  • Domain objects/entities used: The core business objects/aggregates/entities that are acted upon (impacted in some way) by the service operation
  • Error/fault handling complexity: Special error handling/reporting needs (if any) to be applied within the business logic while processing data or invoking underlying data/services

Downstream Integration

  • Service invocation complexity: Where the service operation invokes other services, this represents the number of such services involved
  • Data access complexity: Where the service operation acts on the underlying data store directly (through mechanisms other than services), this represents the nature of such data access
  • Infrastructure access complexity: Where invocation of other services is involved, the facilities provided by the middleware platform or integration frameworks are captured through this parameter

A later part of this paper describes how these parameters are evaluated and rated to arrive at the functional complexity of the service operation.

QoS Objectives

As described earlier, the complexity of the service operation is also a factor of the non-functional requirements specified for the service. These requirements are referred to as QoS objectives in this model. Just like the functional complexity parameters, the QoS objectives are categorized into two major categories: operational objectives and implementation objectives. Here's a brief description of the parameters considered for complexity calculation in each category.

Operational Objectives (Runtime aspects)

  • Response Time: Time taken by the service operation to respond to/acknowledge an invocation of that operation
  • Data Load: Size of the data volume to be handled by the operation during an invocation
  • Concurrency: Does the service operation need to support invocation by multiple clients concurrently?
  • Scalability: The need to support workload growth; in other terms, the ability to handle increased workload (requests/transactions/messages per unit of time)
  • Availability: Expectations regarding service uptime, and downtime/recovery in case of failure
  • Security: Does the service need to support special security measures in the areas of access control, permissions or invocation tracking?
  • Monitoring: Special requirements, if any, for runtime monitoring and control of the service operation (or its configuration) on the hosting infrastructure

Implementation Objectives (Development aspects)

  • Interaction/Interface Type: The interface type(s) to be supported for the service operation, i.e. RPC, messaging, resource, bulk, etc.
  • Reusability: Expectations regarding reuse of the service operation in other service/process compositions
  • Testability: Testability expectations of the service operation; depending on the complexity of the operation and the number of its dependencies (on infrastructure, other services, data, etc.), this may or may not be achievable

Integration Participants / Integration Actors

An extremely important element of any estimation model is the set of actors involved in the system scope being estimated. In the case of SOA, there are no direct "end users" who can be counted in this category. Instead, the system actors are the ones that participate in service-oriented integration as service, data or resource providers for the operations being estimated.

For this model, three major actors have been identified: a database interface, another service interface and a proprietary API/adapter (in increasing order of complexity).

  • DB Interface: The service operation's business logic accesses a database directly through technologies such as ADO.NET, JDBC, O/R mappers, etc.
  • Service Interface/API: The service operation contains business logic that invokes other services through their well-defined/published interfaces or APIs
  • Proprietary APIs: The service operation needs to integrate with a COTS product or a custom software system providing its own API or technology adapters, thus requiring special coding/design to integrate

Project Execution Environment

One final piece of the puzzle in complexity identification is the set of environment (project context) specific aspects that impact how much effort needs to be spent over the lifecycle of the project. While the aspects described so far apply to the service operation, the environmental aspects impact the overall project and not just the service operation. We have seen the following aspects impacting most SOA projects (some apply generally to all projects).

  • Business Domain Knowledge: Good service-oriented development requires teams to have good knowledge of the business being supported by the services. This parameter depicts the level of business domain knowledge that exists within the team to be deployed on the project.
  • Technology Domain Knowledge: On similar lines, this parameter indicates the level of relevant technical domain knowledge that exists within the team.
  • Implementation Paradigm Knowledge: Depending on the implementation paradigm chosen for the service implementation (OO etc.), this indicates the relevance and existence of the paradigm knowledge within the development team.
  • Requirement Stability: It has been observed and proven time and again that the stability of requirements drives project schedules and effort on the ground, whether the approach is traditional waterfall or agile. This parameter indicates the level of requirement stability expected to be experienced in the project.
  • Team Dynamics: As service-oriented-architecture-based development projects are inherently integration heavy, it is imperative to consider team dynamics and collaboration even while estimating. This parameter indicates that aspect.
  • Service Governance Controls: The level of service lifecycle governance controls also impacts the effort that teams have to spend on spec creation, design and review of the service interface, implementation and monitoring. This parameter signifies this quality control aspect of SOA projects.
  • Service Delivery Tool Support: On the ground, the productivity of the team (especially a large one) is directly proportional to the tools that the team employs in the development, build and deployment of the services. This automation support aspect is indicated by this parameter.

Please note that the parameters in each of these categories are by no means exhaustive, nor do they represent all possible elements involved. However, in the experience of the author, these are the most often encountered factors analyzed during estimation in enterprise SOA projects.

Estimation Approach

In order to arrive at the estimates for a project that is expected to deliver a certain portfolio of services within a given domain/segment, we recommend a four-step process: complexity identification, project sizing, effort calculation and cost calculation.


Figure 4 - Service Portfolio - Development Effort Calculation Process

Complexity Identification

As a first step, the complexity of each operation offered on a service interface needs to be identified. All complexity parameters in the functionality, quality of service and integration participant categories are considered in this stage.

Within each category, a simple complexity scoring model is applied. For each parameter, a complexity valuation scheme is defined. This includes three possible options for the complexity range of that parameter, a relative weightage for the parameter and an estimator's rating for that parameter (to be selected during estimation). Based on the weight and the rating selected, a simple weighted complexity value is identified for the parameter. The following figure depicts this for one of the parameters.

Request Message Size (Dataset = unique information set being exchanged over service invocation, e.g. order, trade, instruction)

Valuation | Definition / valuation aid    | Value | Weightage | Total
1         | Request DatasetSize <= 1      | 3     | 1         | 3
2         | 2 <= Request DatasetSize <= 3 |       |           |
3         | Request DatasetSize > 3       |       |           |

Figure 5 - Functional complexity scoring (per complexity factor)
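As a minimal sketch of the scoring scheme described above, each parameter reduces to a rating multiplied by a weightage. The 1-3 rating scale and the sample weightage are taken from the figures; the function name itself is hypothetical, not part of any published tooling for the model.

```python
def score_parameter(rating: int, weightage: int) -> int:
    """Weighted complexity value for one parameter: rating * weightage."""
    if rating not in (1, 2, 3):
        raise ValueError("rating must be 1 (simple), 2 (medium) or 3 (complex)")
    return rating * weightage

# Figure 5 example: Request Message Size rated 3 (DatasetSize > 3), weightage 1
print(score_parameter(3, 1))  # -> 3
```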

In the same manner, all functional complexity factors are rated, and finally a functional complexity score is computed. Figure 6 provides a sample computation.

Functional Complexity Calculation

Request Message Size (Dataset = unique information set being exchanged over service invocation, e.g. order, trade, instruction)
Valuation | Definition / valuation aid    | Value | Weightage | Total
1         | Request DatasetSize <= 1      | 3     | 1         | 3
2         | 2 <= Request DatasetSize <= 3 |       |           |
3         | Request DatasetSize > 3       |       |           |

Response Message Size (Dataset = unique information set being exchanged over service invocation, e.g. order, trade, instruction)
Valuation | Definition / valuation aid     | Value | Weightage | Total
1         | Response DatasetSize <= 1      | 3     | 1         | 3
2         | 2 <= Response DatasetSize <= 3 |       |           |
3         | Response DatasetSize > 3       |       |           |

Request/Response Data Translation Complexity (format and semantic translation)
Valuation | Definition / valuation aid                              | Value | Weightage | Total
1         | No data translation required                            | 3     | 2         | 6
2         | One-way translation required only (request or response) |       |           |
3         | Two-way translation required (request and response)     |       |           |

Core Business Logic Complexity (cyclomatic complexity (CC) at functional spec granularity)
Valuation | Definition / valuation aid | Value | Weightage | Total
1         | Simple (CC <= 2)           | 3     | 3         | 9
2         | Medium (2 < CC <= 5)       |       |           |
3         | Complex (CC > 5)           |       |           |

Domain Objects/Entities to be Manipulated in Business Logic (product, client, instrument, other reference data, etc.)
Valuation | Definition / valuation aid | Value | Weightage | Total
1         | Domain Objects <= 2        | 3     | 2         | 6
2         | 3 <= Domain Objects <= 5   |       |           |
3         | Domain Objects > 5         |       |           |

Data Access Complexity (within the boundary of the business sub-domain hosting the service)
Valuation | Definition / valuation aid                                            | Value | Weightage | Total
1         | Does not require data access OR performs a one-time data read only (no writes) | 3 | 2 | 6
2         | Performs more than one data read OR write                             |       |           |
3         | Needs to perform multiple data read and write operations              |       |           |

Other Service Invocation Complexity
Valuation | Definition / valuation aid                                              | Value | Weightage | Total
1         | No other service call involved (atomic service operation)               | 3     | 2         | 6
2         | Dependencies on enquiry service calls (queries/data fetch calls)        |       |           |
3         | Dependencies on transactional service calls (commands, value-changing calls) |   |           |

Fault/Error Handling Complexity
Valuation | Definition / valuation aid                                                   | Value | Weightage | Total
1         | No special error handling requirements                                       | 3     | 1         | 3
2         | Errors must be reported within the service scope (logging, service response) |       |           |
3         | Errors must be reported to an external system/interface through a provided interface | |        |

Integration Infrastructure Access Complexity
Valuation | Definition / valuation aid                                                          | Value | Weightage | Total
1         | Infrastructure abstraction available through a well-defined functional API (app platform support) | 3 | 1 | 3
2         | Infrastructure abstraction available through a technical API only (e.g. JMS, Apache CXF API) |  |        |
3         | Infrastructure needs to be invoked through custom APIs/product-specific interfaces  |       |           |

Functional Complexity Score: 45

Figure 6 - Functional Complexity Calculation for a Service Operation

In the model, the score may range from 15 (functionally simple operation) to 45 (functionally complex operation). The complexity is represented in the form of a Functional Complexity Factor (FCF), which is determined by the complexity rating range to which the operation belongs (see Figure 7). In the figure, the FCF for the operation is identified as 7, based on the complexity rating of 45 (see Figure 6). To simplify the model, we recommend a weightage scheme of 3, 5 and 7 for simple, average and complex operations respectively.

Computing the Operation's Functional Complexity

Service Operation Type | Description                 | Weightage | # of Services | Result
Simple                 | 15 <= Service Points <= 25  | 3         | 0             | 0
Average                | 26 <= Service Points <= 35  | 5         | 0             | 0
Complex                | 36 <= Service Points <= 45  | 7         | 1             | 7
Functional Complexity Factor (FCF): 7

Figure 7 - Functional Complexity Factor Calculation
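The score-to-FCF mapping in Figure 7 can be sketched as follows. The range boundaries and the 3/5/7 weights are the sample values from the figure, treated here as calibration defaults rather than fixed constants of the model.

```python
def functional_complexity_factor(score: int) -> int:
    """Map a functional complexity score (15-45) to an FCF weight."""
    if 15 <= score <= 25:
        return 3  # simple operation
    if 26 <= score <= 35:
        return 5  # average operation
    if 36 <= score <= 45:
        return 7  # complex operation
    raise ValueError("score outside the model's 15-45 range")

# The operation scored 45 in Figure 6, so it is rated complex
print(functional_complexity_factor(45))  # -> 7
```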

Additionally, the integration participants involved in the service operation are identified, and the complexity of each of these is determined. The result is the Integration Complexity Factor (ICF), as depicted in Figure 8.

Computing Integration Participants' Complexity

# | Service Operation Integration Actor | Type    | Notes/Remarks
1 | Pricing Engine (COTS)               | Custom  | Custom TCP-based interface
2 | Reference Data Service              | Service | Enterprise reference data service on the ESB
3 | Product Database                    | DB      | Local DB within the domain

Integration Type | Description                        | Weightage | # of Actors | Result
Simple           | DB Interface                       | 1         | 1           | 1
Average          | Service Interface                  | 2         | 1           | 2
Complex          | Custom/Proprietary API or Adapters | 3         | 1           | 3
Integration Complexity Factor (ICF): 6

Figure 8 - Integration Complexity Factor Calculation
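A small sketch of the ICF computation: each integration actor contributes a weight by type (DB = 1, service = 2, custom/proprietary API = 3), per Figure 8. The dictionary keys are illustrative shorthand, not model terminology.

```python
# Actor-type weights from Figure 8 (increasing order of complexity)
ACTOR_WEIGHTS = {"db": 1, "service": 2, "custom": 3}

def integration_complexity_factor(actors: list[str]) -> int:
    """ICF = sum of the type weights of all integration actors."""
    return sum(ACTOR_WEIGHTS[kind] for kind in actors)

# Figure 8's three actors: a local DB, a reference data service,
# and a COTS pricing engine with a custom TCP interface
print(integration_complexity_factor(["db", "service", "custom"]))  # -> 6
```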

Together, FCF and ICF give us the first measure of complexity for the service operation under consideration. Following the terminology of existing estimation models, we term this measure the Unadjusted Service Point (USP) count. The adjustments referred to here are the technical complexity factors explained below.

The third contributor to complexity, the set of technical complexity parameters, is rated in a manner similar to the functional parameters, although with a slight difference. Weightages are assigned to each of the parameters, and the estimator selects a value (from the range 0-5, in order of relevance) for each. This results in a factor rating for each of the parameters. The sum of these ratings is the Technical Complexity Score (TCS) for the operation.

Estimating the Technical Complexity Factor (TCF)

Factor | Description                          | Weight | Value (scale 0-5) | Result
T1     | Interaction/Interface Type Objective | 0.5    | 3                 | 1.5
T2     | Response Time Objectives             | 1      | 3                 | 3
T3     | Data Load Objectives                 | 1      | 3                 | 3
T4     | Concurrency Objectives               | 1      | 3                 | 3
T5     | Scalability Objectives               | 1      | 3                 | 3
T6     | Availability Objectives              | 0.5    | 3                 | 1.5
T7     | Security Objectives                  | 0.5    | 3                 | 1.5
T8     | Reusability Objectives               | 1      | 3                 | 3
T9     | Testability Objectives               | 1      | 3                 | 3
T10    | Service Monitoring Objectives        | 0.5    | 3                 | 1.5
Technical Complexity Score (TCS): 24
Technical Complexity Factor (TCF) = 1.0 + (0.01 * TCS): 1.24

Figure 9 - Technical Complexity Factor Calculation

The Technical Complexity Factor (TCF) is a function of the number of QoS factors/parameters and the impact these factors may have on the development effort to be applied in the project. Based on this, we have used the following formula to arrive at the technical complexity factor:

TCF = 1.0 + (0.01 * TCS)

Due to lack of historical data at the time of model development we have assumed uniform impact for each of the parameters (0.01).
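Under that uniform-impact assumption, the TCF computation can be sketched as below. The weights and values reproduce the Figure 9 sample; the abbreviated parameter names are illustrative.

```python
def technical_complexity_factor(ratings: dict[str, tuple[float, int]]) -> float:
    """TCF = 1.0 + 0.01 * TCS, where TCS sums weight * value over T1..T10."""
    tcs = sum(weight * value for weight, value in ratings.values())
    return 1.0 + 0.01 * tcs

# Figure 9 sample: every parameter valued 3 (scale 0-5), weights 0.5 or 1
figure_9 = {
    "interface_type": (0.5, 3), "response_time": (1, 3), "data_load": (1, 3),
    "concurrency": (1, 3), "scalability": (1, 3), "availability": (0.5, 3),
    "security": (0.5, 3), "reusability": (1, 3), "testability": (1, 3),
    "monitoring": (0.5, 3),
}
print(round(technical_complexity_factor(figure_9), 2))  # -> 1.24
```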

Environment Complexity Factor

The Environment Complexity Factor (ECF) is calculated in a manner similar to the TCF. All the complexity parameters are scored, and the result is a weighted rating of the complexity.

Estimating the Environmental Complexity Factor (ECF)

Factor | Description                                  | Weight | Value (scale 0-5) | Result
E1     | Business Domain Knowledge                    | 1      | 3                 | 3
E2     | Technology Domain Knowledge                  | 1      | 3                 | 3
E3     | Implementation Paradigm (e.g. OO) Knowledge  | 0.5    | 3                 | 1.5
E4     | Requirements Stability                       | 0.5    | 3                 | 1.5
E5     | Team Dynamics                                | 0.5    | 3                 | 1.5
E6     | Service Governance Controls                  | 1      | 3                 | 3
E7     | Tool Support for Build and Deployment        | 1      | 3                 | 3
Environmental Complexity Score (ECS): 16.5
Environmental Complexity Factor (ECF) = 0.7 + (0.01 * ECS): 0.865

Figure 10 - Environment Complexity Factor Calculation

Similar to the technical complexity factors, a weight and value are applied relative to the importance of each environmental factor. Based on the calculated Environmental Complexity Score (ECS), we have used the following formula to arrive at the environmental complexity factor:

ECF = 0.7 + (0.01 * ECS)

As in the case of technical complexity factor, we have assumed uniform impact for each of the parameters (0.01) due to lack of historical data at the time of model development.
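The ECF computation follows the same shape as the TCF; this sketch reproduces the Figure 10 sample (seven factors, all valued 3, with weights of 1 or 0.5).

```python
def environmental_complexity_factor(ratings: list[tuple[float, int]]) -> float:
    """ECF = 0.7 + 0.01 * ECS, where ECS sums weight * value over E1..E7."""
    ecs = sum(weight * value for weight, value in ratings)
    return 0.7 + 0.01 * ecs

# Figure 10 sample, in E1..E7 order
figure_10 = [(1, 3), (1, 3), (0.5, 3), (0.5, 3), (0.5, 3), (1, 3), (1, 3)]
print(round(environmental_complexity_factor(figure_10), 3))  # -> 0.865
```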

Project Sizing

Once the complexity of the service operations is known, we can calculate size by first calculating the size of each service operation (Service Operation Size or SOS) and then aggregating upwards for the service interface and the service portfolio. All size measurements are expressed in Service Points (the unit of sizing, like the Use Case Point).

The size of the service operation is simply the product of the USP and the TCF, represented as

SOS = USP * TCF, where USP = FCF + ICF
Given this, the size of the service interface (Service Interface Size or SIS) is the aggregate of its operation sizes, adjusted by the environment complexity factor:

SIS = ECF * Σ SOS (summed over all operations of the service)
In case a larger portfolio of services is part of the project, the Project Size (S) is a simple sum of the SIS values within the portfolio:

S = Σ SIS (summed over all service interfaces in the portfolio)
To summarize, the sizing calculation is carried out in detail only for the lowest unit, the service operation. All other size calculations are aggregations of the same, up to the entire portfolio of services. The final calculated size is expressed in units of Service Points.
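The three-level aggregation can be sketched as below. Two points in this sketch are interpretations rather than formulas stated explicitly in the paper: that USP is the sum of FCF and ICF, and that ECF is applied at the service-interface level (per the model overview, which applies environmental factors after operation complexity is known).

```python
def operation_size(fcf: float, icf: float, tcf: float) -> float:
    """SOS = USP * TCF, taking USP = FCF + ICF (assumed summation)."""
    return (fcf + icf) * tcf

def interface_size(operation_sizes: list[float], ecf: float) -> float:
    """SIS = ECF * sum of the operation sizes (assumed ECF placement)."""
    return ecf * sum(operation_sizes)

def project_size(interface_sizes: list[float]) -> float:
    """S = sum of SIS across the service portfolio."""
    return sum(interface_sizes)

# Worked example using the factors derived earlier in the paper
sos = operation_size(fcf=7, icf=6, tcf=1.24)  # (7 + 6) * 1.24
sis = interface_size([sos], ecf=0.865)        # single-operation service
print(round(project_size([sis]), 2))  # -> 13.94
```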

Effort Calculation

Effort calculation after sizing a project is a long-known and simple technique, and this model follows it instead of creating something new. According to this technique, in order to calculate the effort required to implement a project of a given size, one needs to know the productivity of the project team. In the case of SOA projects, this means finding out how many service points can be delivered per person-day by the team proposed for the project.

Productivity (P) is represented as (assuming person-days, or PDs, as the unit of time measurement)

P = PDs per SP

The overall service development effort (E) is calculated as

E = P*S

As an example, if the productivity of the team is 5 PDs of effort per SP and the Size of project (S) is 50 SPs, the effort required to execute the project would be 250 PDs.
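The effort arithmetic is a one-liner; this sketch simply restates E = P * S with the example's numbers.

```python
def development_effort(productivity_pd_per_sp: float, size_sp: float) -> float:
    """E = P * S: person-days per service point times project size in SPs."""
    return productivity_pd_per_sp * size_sp

# 5 PDs per service point, project size of 50 SPs
print(development_effort(5, 50))  # -> 250.0
```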

An important aspect of effort calculation is the fact that there's more to delivery across the SDLC phases than just development (often referred to as CUT, or coding and unit testing). It has mostly been observed that the standard approach of simply splitting the development effort across requirements, design, coding, testing and support phases is not effective (although essential) for SOA projects. Instead, a more pragmatic activity-based effort split gives a better picture of where and how much effort will have to be spent during the entire project.

In this model, we propose such an activity-based effort split approach to arrive at the overall effort. In this approach, once the service development effort is calculated, project activities are identified in two major categories:

  • Development Activities: Includes activities such as requirement specification, contract design, service implementation design and CUT (coding and unit testing)
  • Test Support and Infrastructure Setup Activities: Includes the support that the team has to provide for infrastructure setup and configuration, integration testing, load and performance testing and acceptance testing

Given this set of activities, we apply a distribution across them, relative to the computed service development effort. The following figure depicts a sample scenario in which the service development effort is distributed across both categories mentioned above. Applying this distribution, we get the Net Effort for the project. The project's overall delivery effort (PDE in the figure below) is calculated by adding the necessary effort for post-production support, contingency and management. The distribution applied for these activities is relative to the Net Effort.

Final Service Development Effort (SDE): 51.00 Person Days (PDs)

Activity-Wise Break-Up

Activity | Split | Effort (PDs) | Remarks
Development Activities (distribution of 100% of SDE):
Service Requirement Specification | 5% | 2.55 | 5% of service development effort
Service Contract Design | 5% | 2.55 | 5% of service development effort
Service Implementation Design | 10% | 5.10 | 10% of service development effort
Construction and Unit Testing (CUT) | 80% | 40.80 | 80% of service development effort
Test Support and Infrastructure Setup Activities (additional, relative to SDE):
Infrastructure Setup and Configuration Support | 5% | 2.55 | Differs per project; 5% of SDE in this example
Integration Testing Support | 20% | 10.20 | Differs per project; 20% of SDE in this example
Load and Performance Testing Support | 10% | 5.10 | Differs per project; 10% of SDE in this example
Acceptance Testing Support | 10% | 5.10 | Differs per project; 10% of SDE in this example
Net Effort (NE) | | 68.85 |
Post-Production Support | 5% | 3.44 | Differs per project; 5% of NE in this example
Management, Coordination & Oversight | 10% | 6.89 | May differ; typically 10% of NE
Contingency | 10% | 6.89 | May differ; typically 10% of NE
Total Project Delivery Effort (PDE), Rounded | | 86.00 | Person Days

Figure 11 - Delivery Effort Calculation based on project scope and activities

As depicted, a service development effort of 51 PDs actually translates to 86 PDs of overall project delivery effort when all other activities and delivery aspects are taken into consideration. This means that the core development activities taken up by developers account for approximately 60% of the project delivery effort (approximately 75% of the Net Effort); therefore, to avoid project surprises, it's essential to explicitly estimate the other activities too.
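A hedged sketch of this roll-up: support percentages are applied relative to the SDE to obtain the Net Effort, and overhead percentages relative to the Net Effort to obtain the PDE. The percentage lists below are illustrative choices that reproduce the figure's rounded bottom line of 86 PDs; actual splits differ per project.

```python
def project_delivery_effort(sde: float,
                            support_pcts: list[float],
                            overhead_pcts: list[float]) -> float:
    """Net Effort = SDE plus support activities (percentages of SDE);
    PDE = Net Effort plus overheads (percentages of Net Effort)."""
    net_effort = sde * (1 + sum(support_pcts))
    return net_effort * (1 + sum(overhead_pcts))

# Illustrative: 35% of support on top of a 51 PD SDE, then 25% of overheads
pde = project_delivery_effort(51, [0.05, 0.20, 0.10], [0.05, 0.10, 0.10])
print(round(pde))  # -> 86
```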

Cost Calculation

Once the effort estimates for the project are available, the overall cost (C) is a function of the hourly/daily rates (R) that the organization charges for the work. This essentially means

C = E*R

This simple approach to cost calculation is well established and known to almost all teams, so it's not explained further here.

A slightly pragmatic variation of the scheme is observed when the project's work-location split is applied to the calculation. The location split represents the ratio in which the team is split across two or more locations of project execution. In enterprise IT service provider organizations, onsite:offshore is a typical location split applied to projects. In such cases, different rates apply to different locations.

Let's extend the example in the last section to cost calculation. Assume that the project requiring 250 PDs of effort will be executed across two locations, New York (onsite) and Bangalore (offshore), with daily rates of $1000 and $300 respectively. Assume further that 80% of the team will be located in Bangalore and the remaining 20% in New York. Given this, the cost of the project is calculated as

Table 1 - Project Cost Calculation (location split scenario)

Total Effort (PDs): 250

Location  | Location Split (%) | Location Daily Rate ($) | Effort Split per Location (PDs) | Cost ($)
Bangalore | 80                 | 300                     | 200                             | 60,000.00
New York  | 20                 | 1,000                   | 50                              | 50,000.00
Total Project Cost ($): 110,000.00
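The location-split cost calculation above reduces to a sum of effort times daily rate per location; a minimal sketch:

```python
def project_cost(location_efforts: list[tuple[int, int]]) -> int:
    """C = sum over locations of (effort in PDs * daily rate in $)."""
    return sum(effort * rate for effort, rate in location_efforts)

# 250 PDs split as 200 PDs in Bangalore ($300/day) and 50 PDs in New York ($1000/day)
print(project_cost([(200, 300), (50, 1000)]))  # -> 110000
```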

Another variation is graded rates. In this case, instead of location, a graded resourcing variation is applied to the team proposed for the project. Finally, a hybrid model including both location and grade splits is also employed. Both are well-known methods in the IT space and thus not explained further in this paper.

Variation Points

While some elements of the model are applicable across project estimation scenarios, others will tend to differ across projects and organizations. These variations are expected to be encountered through:

  • Weightages and relative importance values within the functional, QoS and environment complexity calculation
  • Project activities breakup and effort split distribution relative to service development and net effort
  • Productivity Factor

The model provides the necessary support to implement these variation points when applying it to different scenarios.


Conclusion

Estimating SOA projects has been a challenging topic. There are specific considerations for the functional complexity of services, the QoS expectations of these services and the environment in which the services are developed. The model presented here blends experience in executing SOA projects with well-known estimation techniques that have stood the test of time in enterprise IT projects. With the possibility of adapting the model to the specific preferences of the project team, we have a tool that helps us get a handle on the size of the SOA project at hand and derive an estimate of the effort required to deliver it.


References

[REF-1] Integration Efforts Estimation in Service Oriented Architecture (SOA) Applications http://www.iiste.org/Journals/index.php/IKM/article/download/747/648

[REF-2] Guidelines for SOA Projects http://www.cosmicon.com/portal/dl_info.asp?id=124

[REF-3] A Framework for Scope, Cost and Effort Estimation for Service Oriented Architecture (SOA) Projects http://www.nicta.com.au/pub?id=1579

[REF-4] Functional Size, Effort and Cost of the SOA Projects with Function Points http://www.servicetechmag.com/I68/1112-4

[REF-5] SOA Glossary at http://serviceorientation.com/soaglossary/service