
Performance Lead

ValueMomentum, Trenton, NJ, United States


At ValueMomentum’s Technology Center, we are a team of passionate engineers who thrive on tackling complex business challenges with innovative solutions while transforming the P&C insurance value chain. We achieve this through a strong engineering foundation and by continuously refining our processes, methodologies, tools, agile delivery teams, and core engineering archetypes. Our core expertise lies in six key areas: Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and Domain Expertise.

We are establishing a Sustaining Engineering team focused on improving performance, stability, reliability, and observability of the Guidewire platform (PolicyCenter, BillingCenter, and integrations).

Responsibilities

Engagement Overview

Role Type: Contract - SOW

Work Model: Cross-team, platform-focused

Reporting Structure

Reports into Sustaining Engineering

Dotted line to internal Performance Engineer

Primary Goal: Improve overall system health through root cause identification and measurable validation of fixes

Key Objectives

Support remediation of Guidewire Platform Health findings (performance, integration, stability)

Establish or enhance the performance and reliability testing strategy

Baseline, measure, and track:

Throughput

Error rates

Batch/job stability and success rates

Validate that fixes result in measurable improvements in system health

Contribute to a repeatable, scalable performance and reliability testing practice

Core Responsibilities

Performance & System Testing

Design and execute:

Performance, load, and stress tests

End-to-end system tests across PolicyCenter and BillingCenter

Validate system behavior under normal and peak conditions

Integration & Distributed System Validation

Test and analyze:

API interactions

Asynchronous processing / messaging flows

Identify and diagnose integration failures and inconsistencies

Validate:

Batch job execution and performance

Work queue processing and failure handling

Identify retry issues, bottlenecks, and backlog risks

Analyze:

Application-layer bottlenecks

JVM/thread behavior

API latency and failure points

Observability & Error Analysis

Analyze system logs and monitoring outputs

Identify gaps in logging and monitoring

Recommend improvements to observability

Engineering Collaboration

Partner with developers to address:

Query optimization

Code-level performance issues

Integration reliability

Translate findings into clear, actionable recommendations

Validation & Reporting

Provide before/after metric validation for fixes

Support sprint/release reviews with quantifiable improvements

Help establish consistent reporting on system health

Required Skills & Experience

Technical Capabilities

Strong experience in performance and reliability testing in distributed systems

Hands‑on experience with performance testing tools:

JMeter, Gatling, LoadRunner, or similar

Experience testing system integrations:

APIs (REST/SOAP)
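To illustrate the kind of load generation and latency/error measurement the tools above perform, here is a minimal sketch in Python. The `flaky_endpoint` stub, its latency range, and its failure rate are hypothetical stand-ins for a real HTTP call; a tool like JMeter or Gatling would handle this at far greater scale.

```python
import concurrent.futures
import random
import statistics
import time

def flaky_endpoint():
    """Stand-in for a real HTTP call (hypothetical; swap in an actual request)."""
    time.sleep(random.uniform(0.001, 0.005))  # simulated service latency
    if random.random() < 0.05:                # ~5% simulated failures
        raise RuntimeError("HTTP 500")

def run_load_test(call, users=10, requests_per_user=20):
    """Fire concurrent requests and report latency percentiles and error rate."""
    def one_request():
        start = time.perf_counter()
        try:
            call()
            ok = True
        except Exception:
            ok = False
        return time.perf_counter() - start, ok

    # A thread pool of `users` workers approximates concurrent virtual users.
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: one_request(),
                                range(users * requests_per_user)))

    latencies = sorted(lat for lat, _ in results)
    errors = sum(1 for _, ok in results if not ok)
    return {
        "requests": len(results),
        "error_rate": errors / len(results),
        "p50_ms": statistics.median(latencies) * 1000,
        "p95_ms": latencies[int(0.95 * (len(latencies) - 1))] * 1000,
    }

stats = run_load_test(flaky_endpoint)
print(stats)
```

The p50/p95 latency and error-rate figures this produces are exactly the metrics the role would baseline and re-measure to validate fixes.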

System Analysis Skills

Ability to analyze:

JVM / thread behavior

Database queries and performance

API latency and throughput

High error volumes and log noise

Systemic vs localized issues

Working Style

Strong root cause analysis mindset (not just defect identification)

Ability to translate technical findings into actionable insights

Comfortable working in cross‑team, platform‑level environments

Preferred Qualifications

Experience with:

Guidewire (PolicyCenter and BillingCenter strongly preferred)

Insurance domain knowledge (policy and billing flows)

Experience with:

Observability tools (e.g., Datadog)

Background in:

SRE (Site Reliability Engineering)

Systems‑level QA

Success Criteria

The contractor will be successful if they demonstrate:

Measurable improvements in:

System performance (latency, throughput)

Integration reliability (reduced cross‑system failures)

Reduction in error noise and improved observability

Consistent ability to identify true root causes across system layers

Clear contribution to sustained system health improvements

Establishment of repeatable performance and reliability testing practices

Job Description

Work on Performance Test Strategies/Test Plans

Define and Validate Workload Models with the Project Teams

Create Test Case Workflow/Performance Test Scripts

Validate Monitoring tools

Validate Test Environments are ready, available, accurate, etc.

Define, communicate and plan test schedule/windows

Execute test per planned windows and analyze executed tests

Document defects in Micro Focus Quality Center

Review Test Results

Create Results Report

Mandatory skills

Experience in performance testing, monitoring, and analysis, with excellent reporting skills

Experience in test strategy, test design, and test plan reporting

Experience in creating performance test scripts using protocols such as Web (HTTP/HTML), TruClient, Web Services, and REST, as well as .NET applications

Experience in creating scenarios that simulate varied load and user patterns for stress, peak load, endurance, volume, and other test types

Experience with tools such as LoadRunner, JMeter, AppDynamics, and Splunk (or other APM tools) across the web, application, and database layers

Good experience in performance testing of Web (HTTP/HTML), TruClient, Web Services, REST, and .NET applications

Proficiency in one or more general-purpose programming languages

Good understanding of agile methodologies and agile performance testing.

Knowledge of the StormRunner tool

Ability to work in a team within a diverse, multi-stakeholder environment

Good analytical skills

Experience and desire to work in a global delivery environment

Desired Skills

Working knowledge of debugging performance scripts and identifying performance bottlenecks through drill-down analysis with monitoring tools; interaction with various stakeholders to resolve performance issues

Developing performance testing solutions for small services in a DevOps environment

Strong understanding of techniques such as Continuous Integration, Continuous Delivery, Test Driven Development, Cloud Development, application resiliency and security

Experience in working for cloud‑based application performance testing and tuning

Experience integrating performance testing into CI/CD pipelines
