As an AWS employee, I adhere to company policies regarding data confidentiality. While I cannot share specific usage metrics or proprietary customer adoption data, I have provided qualitative insights and outcomes to demonstrate the impact of my contributions. These examples align with publicly available information and best practices while respecting internal guidelines.
Migration Hub Strategy Recommendations (MHSR) is one of the most complex modernization tools within AWS. It supports enterprise customers as they evaluate and modernize thousands of applications, many of which contain millions of lines of code and represent significant licensing and operational costs. When I stepped into the design leadership role, the product was evolving rapidly and the team had undergone several transitions in designers, PMTs, and engineering owners.
My responsibility was to maintain the product vision, strengthen customer trust, and ensure design quality across multiple years of development. I guided the experience through three major phases: simplifying migration planning, building transparency in source code analysis, and enabling large-scale modernization through machine learning.
Enterprises were struggling to modernize their portfolios because of:
Complex legacy infrastructure with thousands of interconnected systems.
Fragmented tooling that did not reflect how architects and program managers plan modernization.
Lack of transparency about how data was analyzed or how recommendations were produced.
Long timeframes for portfolio assessment, often measured in years.
The experience needed to accelerate modernization while reducing uncertainty, improving trust, and aligning with how enterprise users think and make decisions.
I served as the design leader responsible for guiding the experience strategy, providing continuity across multiple team transitions, and ensuring that every evolution of the product reflected customer mental models, modernization best practices, and AWS experience standards.
Early versions of the product focused on analyzing infrastructure to provide modernization recommendations. I guided the team to simplify the experience by clarifying dependencies, identifying blockers, and surfacing next steps that aligned with enterprise workflows.
Key contributions included:
Defining how complex dependency data should be presented so customers could interpret it accurately.
Creating visualization patterns that helped users understand portfolio health at a glance.
Reducing cognitive load by prioritizing essential information and deferring detail appropriately.
This foundation made early planning significantly faster and more understandable for program leaders and solution architects.
As the product expanded to support source code and database analysis, enterprise customers raised concerns about data sharing, privacy, and the inner workings of recommendation logic. I collaborated closely with product, engineering, and research teams to strengthen transparency and build trust.
I led the team to introduce:
Clear workflows that explained how source code was scanned.
Visibility into what data was analyzed and why.
Explainable recommendation logic to reduce uncertainty and increase confidence.
These changes directly increased enterprise adoption and gave customers the assurance they needed to use the tool with large and sensitive portfolios.
The next evolution of the product focused on using machine learning to automatically group thousands of applications based on dependencies, modernization patterns, and organizational structures. This was particularly valuable for customers managing portfolios of 1000+ applications.
My leadership included:
Defining how machine-learning-generated groupings should be visualized so customers could understand them intuitively.
Aligning recommendations with the mental models of architects, program leaders, and business stakeholders.
Prioritizing cost-benefit analysis, modernization complexity, and licensing outcomes to support decision-making.
These improvements helped reduce modernization planning timelines from years to months and allowed customers to estimate licensing savings of up to 60 percent.
Across three designers, two PMTs, and several engineering transitions, I ensured continuity in the following ways:
I framed decision-making around customer mental models and modernization workflows so the team could make consistent choices even during turnover.
I introduced working cadences, research handoffs, critique structures, and workflow review mechanisms that gave designers predictable ways to collaborate and deliver.
I conducted CX pre-checks, mentored designers on identifying CX risks, and ensured that shipping work reflected AWS experience standards for usability, accessibility, and clarity.
I embedded transparency, explainability, and customer control into every AI-supported workflow, ensuring customers understood how recommendations were generated and where human oversight was required.
Reduced modernization planning timelines from years to months for portfolios of 10,000 or more applications.
Increased customer trust in AI-supported workflows through transparency in both data usage and recommendation logic.
Maintained design consistency and product vision across multiple years and multiple team transitions.
Introduced scalable UX mechanisms later adopted by other AWS modernization teams, including workflows for explainability and mental model alignment.
MHSR demonstrated the importance of strong UX leadership in long horizon, highly complex programs. By maintaining vision clarity, building trust into every workflow, and anchoring decisions in customer mental models, I helped the team deliver meaningful improvements for some of the largest enterprise modernization customers at AWS. The experience strengthened my belief that UX leadership is most impactful when it creates clarity during change and ensures continuity during multi year transformations.
Fig 1: Strategy Recommendation - Start EC2 workload assessment
Fig 2: Strategy Recommendation - Start EC2 workload assessment
Fig 3: Strategy Recommendation - Start EC2 workload assessment
Fig 4: Strategy Recommendation - Assessment in progress
Fig 5: Strategy Recommendation - Assessment Dashboard
Fig 6: Strategy Recommendation - Assessment Results
Fig 7: Strategy Recommendation - Workload Grouping
Fig 8: Strategy Recommendation - Recommendations at workload level
Fig 9: Strategy Recommendation - Export Recommendations