SerialReads

Problem-Solving Skills: A Comprehensive Practitioner’s Guide

May 10, 2025

Definitions and Historical Context of Problem-Solving

Problem-solving is generally defined as the cognitive process of identifying an issue or challenge, devising potential solutions, and executing those solutions to achieve a goal. It is considered a higher-order thinking skill that has been studied across various fields, including psychology, computer science, and engineering. Over time, different perspectives have shaped our understanding of problem-solving:

By understanding these historical and cross-disciplinary contexts, we see that problem-solving can be viewed both as a structured methodology and an art. It involves cognitive structure (perception, memory, logic), but also creativity, intuition, and domain-specific knowledge. Modern problem-solving frameworks often blend these perspectives – using structured steps and computational tools (as in engineering and CS) while also encouraging insight, creativity, and user-centric thinking (as in cognitive psychology and design fields).

Core Cognitive Processes in Problem-Solving

Human problem-solving is underpinned by several core cognitive and psychological processes. Understanding these can help us become more aware of how we solve problems and how to improve those abilities. Key processes include:

In summary, effective problem-solving requires not just domain knowledge but also self-awareness of our cognitive processes. By training our attention (to focus on relevant details), expanding memory/knowledge (through learning and experience), sharpening reasoning skills (practicing logic, math, etc.), and managing decision biases, we improve our ability to tackle problems. The next sections will introduce formal methodologies and tools that leverage these cognitive processes while providing structured approaches to solving problems.

Methodologies for Problem-Solving in Practice

Over the years, practitioners and theorists have developed formal frameworks and methodologies to guide problem-solving. These range from high-level heuristics to detailed, step-by-step processes. Here we analyze several influential methodologies, each with its strengths, limitations, and use cases (especially in software and engineering contexts):

Pólya’s Four-Step Problem-Solving Method (Heuristic Approach)

One of the classic frameworks for approaching problems comes from mathematician George Pólya, who outlined a general-purpose method in his 1945 book "How to Solve It." Pólya’s approach is simple but powerful, and it can be applied beyond mathematics to almost any problem:

  1. Understand the Problem: Clarify what is being asked. Identify the unknowns, data, and conditions. Paraphrase the problem to ensure you comprehend it fully. Pólya noted that many students rush this step, leading to failure simply because they misunderstood the actual problem.
  2. Devise a Plan: Think of possible strategies to solve the problem. This could involve drawing a diagram, looking for patterns, simplifying the problem, using an analogy, working backward from the goal, etc. Pólya provided a rich list of heuristic strategies (guess-and-check, make an equation, consider special cases, etc.) to consider. The key is to choose an approach that seems promising.
  3. Carry Out the Plan: Execute your chosen solution strategy carefully and step-by-step. This might involve calculations, constructing something, or implementing code. Remain patient and precise, and if the plan isn’t working, acknowledge that and return to step 2 to try a different plan (it’s normal to iterate).
  4. Look Back (Review/Extend): Once you have a solution, verify that it indeed solves the problem and examine the result critically. Ask, “Why did this work?” and “Can this solution be improved or extended?”. This reflective step consolidates learning; by analyzing what worked, you become better at tackling future problems.

Strengths: Pólya’s method is straightforward and teaches a problem-solving mindset that emphasizes understanding and reflection. It’s particularly useful for intermediate engineers or students because it provides a clear scaffold to follow. In software engineering, developers often implicitly follow these steps when debugging (understand the bug, propose a fix, implement it, test and review). It’s flexible: the method doesn’t prescribe what specific plan to use, only that you should have one, which encourages creativity and adaptability. Empirical studies in education have found that teaching using Pólya’s framework can improve students’ mathematical problem-solving performance. In one study, a class of high school students trained in Pólya’s method saw their average test scores rise from about 68% (unsatisfactory) to 75% (fairly satisfactory) on word-problem solving, indicating a notable improvement after applying the structured approach.

Limitations: Because it’s so general, Pólya’s framework doesn’t automatically give you domain-specific techniques. Novices might struggle with the “Devise a Plan” step if they lack knowledge of common strategies in that domain. For instance, a new programmer might understand a task but not know the algorithmic patterns to solve it. The method also assumes a fairly well-defined problem to start with; for very fuzzy problems, you might not even know the goal clearly (in such cases, Pólya’s method may need to be preceded by problem-definition work). Nonetheless, even in ill-defined problems, ensuring you truly “Understand the Problem” is a critical first step.

Use in Software Engineering: Pólya’s method can be directly mapped to software problem-solving. Example use cases:

In all these cases, Pólya’s emphasis on understanding and reflection helps engineers avoid hasty patches and promotes deeper learning from each incident.

TRIZ (Theory of Inventive Problem Solving)

TRIZ is a methodology developed by Soviet engineer Genrich Altshuller and colleagues starting in 1946, aimed at systematic innovation. Rather than relying on random brainstorming, TRIZ is built on the premise that many problems across industries share common patterns of solutions. Altshuller analyzed hundreds of thousands of patents to distill these patterns. The result was a set of principles and tools to guide inventors in solving technical problems creatively.

Key elements of TRIZ include:

Strengths: TRIZ provides a rich ideation toolkit. It shines in inventive design and engineering problems where you’re looking for breakthrough solutions (for instance, product design, manufacturing processes, or resolving technical bottlenecks). Instead of brainstorming blindly, TRIZ guides you to think along proven solution principles. This can be very powerful in engineering domains – it has been used in mechanical design, aerospace, automotive engineering, etc., to solve problems that stumped engineers for a long time. TRIZ’s approach is systematic and knowledge-based, which appeals to analytically minded teams. It also encourages overcoming psychological inertia – by focusing on abstract principles, you might discover a solution used in a very different field that you can adapt to your problem.

In software engineering, although TRIZ originated in mechanical/electrical domains, its principles can still apply. For example, Principle #2 “Taking out” (isolating an interfering part) could translate to isolating a faulty microservice in a distributed system. Principle #10 “Preliminary Action” (performing required changes before they are needed) is analogous to prefetching or precomputing results in software to improve performance. Some practitioners have mapped the 40 TRIZ principles to software design patterns and problems, finding analogies such as using inheritance or polymorphism (software concepts) as instances of certain TRIZ principles for flexibility.

Limitations: TRIZ can be overwhelming for newcomers – there is a lot of jargon (contradictions, inventive principles, function analysis, etc.) and it typically requires training to apply effectively. It’s not as straightforward as something like Polya’s four steps; rather, it’s a compendium of strategies. For less technical or more human-centered problems, TRIZ might feel too rigid or not directly applicable. Also, generating the “right” contradiction formulation for a problem can be challenging (expressing your problem in terms of one parameter improving and another worsening). In software, some critics argue that TRIZ doesn’t map neatly onto problems where human factors or rapid iteration are in play, since software issues often involve user experience or team dynamics that TRIZ was not originally designed for.

Use Cases in Software/Tech: TRIZ has seen use in areas like software architecture and process improvement:

In summary, TRIZ brings a patent database of human ingenuity to your fingertips. It’s most beneficial for advanced problem-solvers (engineers and inventors) looking for out-of-the-box solutions rooted in technical logic. When combined with domain knowledge, TRIZ can produce truly novel solutions.

Root Cause Analysis (RCA)

Root Cause Analysis is not a single method but rather an umbrella term for techniques used to find the fundamental cause of a problem so that it can be fixed or prevented permanently. The mindset here is to avoid treating symptoms and instead identify why a problem occurred in order to address that root cause.

RCA is widely practiced in engineering, quality assurance, IT operations, healthcare, and many other fields. Some common approaches and tools under RCA include:

Strengths: RCA’s biggest benefit is that it leads to permanent fixes and process improvement. By finding the true root cause, you can implement changes that prevent the problem from recurring, which is far more efficient long-term than repeatedly fixing symptoms. RCA also often uncovers broader issues (e.g. systemic issues in an organization’s workflow, design flaws, or hidden assumptions) that, when corrected, improve overall quality and reliability. In software engineering, adopting RCA in the form of post-incident reviews and bug root cause analysis has been shown to drastically reduce repeated outages and improve uptime. RCA is also straightforward and intuitive – techniques like 5 Whys can be taught to any professional and quickly become a habit whenever something goes wrong.

Limitations: A challenge with RCA is that not every problem has a single root cause – many are multifaceted. Chasing a singular root cause can sometimes oversimplify complex issues (for example, a project failure might be due to a combination of technical, organizational, and market factors, not just one root cause). Also, RCA is reactive (after-the-fact) unless combined with proactive analyses like FMEA. In fast-paced environments, teams may feel they lack time for thorough RCA on every incident (though the counter-argument is that failing to do RCA allows more incidents to happen). Another limitation is that the effectiveness of RCA is only as good as the honesty and completeness of the analysis. If a culture is defensive, the “root cause” might be superficially identified as operator error, missing deeper process issues. Hence the emphasis on blameless culture in DevOps, for instance, to get genuine answers rather than finger-pointing.

Use Cases in Software Engineering: RCA is extremely relevant in software and IT:

Overall, RCA methodologies ensure that every failure becomes a learning opportunity. In complex software systems, embracing RCA leads to robust systems and a culture of continuous improvement.

Design Thinking

Design Thinking is a human-centered problem-solving methodology widely used in product design, user experience (UX), and business innovation. It gained prominence through IDEO and the Stanford d.school and has since been adopted across industries (including software companies, service design, etc.). Design thinking emphasizes understanding the user’s needs deeply and iterating through creative solutions. The process is often depicted in 5 (or sometimes 6) stages:

  1. Empathize: Research and observe users to understand their experiences, needs, and pain points. This may involve interviews, shadowing, surveys, etc. The goal is to set aside your assumptions and truly see the problem from the user’s perspective. In software, this could mean watching how real users interact with your application to gather where they struggle.
  2. Define: Synthesize the findings from empathy work into a clear problem statement or point-of-view that describes the core problem to solve (often in terms of a user need). For example, instead of “we need to increase engagement on our app,” a human-centered definition might be “Busy parents need a quicker way to log school events on the app, because they currently find the process too time-consuming.” A well-framed problem statement focuses on user needs and insights, and guides ideation by providing focus.
  3. Ideate: Brainstorm and generate a wide range of ideas for solutions. In this phase, quantity is valued over quality – the aim is to explore lots of possibilities, encouraging wild ideas and deferred judgment. Techniques include brainstorming sessions, sketching, mind mapping, and worst-possible-idea exercises. By involving cross-functional team members, design thinking leverages diverse perspectives. For example, a software team ideating might include developers, a UX designer, and a customer support rep, each contributing different ideas. This stage embraces creativity and “thinking outside the box.”
  4. Prototype: Take one or more of the best ideas and build a prototype – a low-fidelity, inexpensive version of the solution that can be a model, a sketch, a storyboard, a clickable interface demo, etc. The prototype should be just detailed enough to gather feedback. In software, a prototype might be a mock-up UI or a simplified beta feature. The idea is to create something the team and users can interact with.
  5. Test: Try out the prototype with users (or stakeholders) to observe how well it solves the problem and gather feedback. Testing often reveals new insights – perhaps users use the solution in a surprising way, or maybe it doesn’t actually solve the defined problem as expected. The team then uses this feedback to refine the solution or even reframe the problem, iterating through the cycle again if needed. Design thinking is inherently iterative; it’s common to cycle through prototype → test → empathize again (hence the process is sometimes drawn as a loop or a set of repeating cycles).

Design thinking is often summarized by the mantra “iterate toward the solution” and the principle “fail early, fail often” – meaning it’s better to catch a flawed concept at prototype stage via user feedback than after full implementation.

Strengths: Design thinking’s user-centric focus ensures that solutions are grounded in real user needs, which increases the likelihood of adoption and success. It’s particularly powerful for ill-defined or open-ended problems (e.g., designing a new app feature, improving a customer journey, creating a business strategy) where understanding human behavior is key. By encouraging divergent thinking (in ideation) and then convergent thinking (in selecting and prototyping ideas), it balances creativity with practicality. Another strength is that it fosters cross-disciplinary teamwork and innovation. Many organizations have reported significant improvements by applying design thinking – for example, IBM trained thousands of employees in design thinking and found that teams got products to market twice as fast and with greater alignment, ultimately achieving over 300% ROI according to a Forrester study. The approach can breathe life into stagnating products, as seen in the famous Airbnb story where the founders applied a design thinking mentality (specifically, doing something non-scalable to empathize with users) which turned their failing startup into a billion-dollar business (more on that in Case Studies). Design thinking also helps to mitigate risk: by testing prototypes with users, you catch flaws early and avoid costly full-launch failures.

Limitations: One critique is that design thinking can be time-consuming – multiple rounds of research, prototyping, and testing require effort and buy-in. In fast-paced agile teams, some see it as potentially slowing down delivery (though it can be integrated into agile sprints in practice). There’s also a risk of superficial adoption – some teams conduct a single workshop and claim “we did design thinking,” without truly embracing iterative user testing or pushing into genuinely creative territory (the so-called “design thinking theater”). For very technical problems where human users aren’t central (say optimizing an algorithm’s performance), design thinking might not add much value compared to analytic approaches. Another limitation is that it requires access to users for testing; if a team can’t easily get user feedback, the iterative loop suffers. Lastly, focusing heavily on users’ present needs can sometimes neglect strategic foresight – users might not envision radical innovations they’ve never seen, so breakthroughs sometimes require going beyond what users say (a quip often attributed to Henry Ford: “If I asked people what they wanted, they’d say a faster horse”). Good design thinking practitioners balance current user insight with vision.

Use Cases in Software and Business:

Design thinking complements other problem-solving methods by injecting a heavy dose of customer and user perspective, which can be especially refreshing in software projects that risk becoming too inwardly focused on technology rather than the people using it.

Systems Thinking

Systems Thinking is an approach for tackling complex, interconnected problems by viewing them holistically – as parts of an overall system – rather than in isolation. It’s particularly useful for “wicked problems” (those that are ill-defined and have many interdependencies) and in understanding large-scale or organizational challenges. At its core, systems thinking encourages problem-solvers to consider the broader context: the relationships, feedback loops, and dynamics that influence the system’s behavior.

Key concepts in systems thinking include:

Strengths: Systems thinking excels in problems that are complex, interconnected, and dynamic – scenarios where linear thinking fails. It helps prevent siloed solutions that fix one part of a problem but inadvertently cause another problem elsewhere. By encouraging a broader view, it often leads to more sustainable, long-term solutions. For example, in environmental issues or global supply chain problems, systems thinking is essential to avoid short-sighted fixes. In corporate strategy, systems thinking can reveal how different departments’ goals might be in conflict and causing inefficiencies (the classic “local optima vs global optimum” issue). Applying systems thinking can result in aligning incentives and processes across an organization. It’s also critical in software architecture: treating a distributed system as a whole can uncover emergent issues like cascading failures, which you wouldn’t notice if you only ever looked at each microservice independently. Systems thinking tools (like system dynamics modeling and simulations) allow “what-if” experimentation – you can simulate how a policy change might ripple through a system before implementing it.

Limitations: Systems thinking can be abstract and complex; it sometimes feels like boiling the ocean, because “everything is connected to everything”. For a practitioner under time pressure, doing a full causal loop analysis or stock-and-flow model might be impractical. There’s also the challenge of bounded rationality – no one can perfectly model an entire system with all its details, so we make simplified models that might miss some aspects. If those missing aspects are important, our solutions might miss the mark. Another limitation is that communicating systems insights can be hard – diagrams of feedback loops are not intuitive to everyone, so gaining buy-in for systemic changes (which might cut across silos or require long-term thinking) is a leadership challenge. Finally, systems thinking tends to highlight that there are no simple root causes – which, while true, can make it hard to decide on concrete action. It often needs to be paired with decision frameworks to choose interventions.

Use Cases:

Systems thinking encourages thinking long-term and big-picture. For engineers and the general public alike, it’s a reminder that many of our toughest problems (climate change, organizational transformation, legacy system overhauls) aren’t linear – they require understanding interdependencies. A practical takeaway for everyday problem-solvers is to occasionally step back and ask: “Am I optimizing one part of this system at the expense of another? What are the side effects?” That question itself is a systems thinking lens.

Empirical Studies: What Works in Problem-Solving?

Beyond theory and methodology, it’s important to ask: what techniques actually improve problem-solving effectiveness, according to research? Over decades, cognitive scientists, educational researchers, and industry studies have investigated problem-solving in controlled and real-world settings. Here we summarize and critique key findings from empirical studies, focusing on results that practitioners can use because they are reproducible and grounded in evidence:

In critique, many empirical studies on problem-solving are context-dependent, and there is sometimes conflicting evidence. Human problem-solving is complex, so what works in one scenario (e.g., a hackathon) might not in another (troubleshooting an emergency). Nonetheless, a general takeaway from research is:

One clear gap in current literature is measuring long-term adaptive problem-solving – in an era of rapidly changing technology, how do we train people not just to solve today’s known problems, but to learn how to learn for solving tomorrow’s unprecedented problems? Future research is pointing toward meta-cognitive training, cross-domain experiences, and perhaps human-AI teaming as areas to explore (where an AI might handle routine aspects so humans focus on creative aspects, raising new questions about what skills are most needed – possibly a new kind of problem-solving literacy).

Actionable Frameworks and Tools

In day-to-day practice, problem-solvers often rely on tangible tools and frameworks to structure their thinking. These tools can be analog (pen-and-paper diagrams) or digital, but their role is to guide the process, capture thoughts, and communicate reasoning. Let’s look at some popular ones and how to use them effectively in technical contexts:

Decision Trees

A Decision Tree is a graphical representation of choices and their possible consequences, including chance event outcomes, resource costs, and utility. It looks like a branching tree where each node represents a decision or a random event, and each branch represents an outcome or choice. Decision trees are particularly useful for making decisions under uncertainty or with multiple stages.

How to use: Start with a root node that states the initial decision to be made or situation. Draw branches for each possible action or option. If there are uncertainties (like “if market goes up or down” or “if the test passes or fails”), from each branch draw chance nodes (often depicted as circles) that branch into the possible outcomes, with probabilities if known. At the end of each path, write the outcome or payoff (could be a quantitative value like cost, time, or an indicator of success). Once the tree is constructed, you can evaluate it by working backwards (a process called “folding back”): calculate expected values at chance nodes, compare benefits at decision nodes to pick the best branch.

Example (Software context): Suppose you are deciding between two technical approaches for a project – rewrite the system from scratch or refactor incrementally. A decision tree could incorporate factors like: if you rewrite, there’s a risk (say 30% probability) of delay by 3 months; if you refactor, there’s a risk of only partially addressing the issues. You assign costs to delays and benefits to improved performance, then use the tree to see which decision yields a better expected outcome. This structured approach forces you to consider scenarios (best case, worst case, likely case) explicitly.
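To make the folding-back evaluation concrete, here is a minimal Python sketch of the rewrite-vs-refactor comparison. Apart from the 30% delay risk mentioned above, all probabilities and payoff numbers are illustrative assumptions, not figures from any real project:

```python
# Expected-value comparison for the rewrite-vs-refactor decision sketched above.
# Payoffs are in arbitrary "value units" and are assumptions for illustration.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs for one decision branch."""
    return sum(p * payoff for p, payoff in outcomes)

rewrite = [
    (0.30, 100 - 60),  # 30% chance the rewrite slips 3 months (delay cost 60)
    (0.70, 100),       # 70% chance it lands on time with full benefit
]
refactor = [
    (0.50, 70),        # 50% chance the refactor only partially addresses the issues
    (0.50, 90),        # 50% chance it addresses most of them
]

for name, branch in [("rewrite", rewrite), ("refactor", refactor)]:
    print(f"{name}: expected value = {expected_value(branch):.1f}")

# Folding back = pick the branch with the higher expected value, then vary the
# probabilities (sensitivity analysis) to see whether the preferred choice flips.
```

With these particular numbers the rewrite edges ahead (82 vs. 80), but nudging the delay probability upward flips the choice – exactly the kind of sensitivity you want to expose before committing.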

Tips: Keep the tree to a reasonable size; too many branches can get unwieldy. Focus on decisions and uncertainties that significantly affect the outcome. Use estimated numbers where possible (even if rough) – this adds objectivity. Decision trees pair well with sensitivity analysis: by tweaking probabilities or payoffs, you see if the preferred decision changes (which tells you how robust your decision is to uncertainty). In code or algorithm design, decision trees can also be directly implemented for automated decision-making (as in AI or machine learning classification trees), but here we focus on the human decision support aspect.

Mind Maps

A Mind Map is a visual brainstorming tool that organizes ideas radiating from a central concept. It’s essentially a diagram where the central problem or theme sits in the middle, and branches (usually curvy lines) emanate outward to subtopics, which can further branch into sub-subtopics. Mind maps leverage the brain’s associative nature – they’re great for idea generation, organizing thoughts, and seeing relationships.

How to use: Write the core problem or topic in the center. For example, the problem could be “Improve Website Performance”. Then draw branches out for main categories you want to explore: “Frontend Optimization”, “Backend Optimization”, “Infrastructure”, “User Behavior”, etc. Then, for each of those, branch further: under Frontend, you might have “Minify CSS/JS”, “Lazy-load Images”, “Cache Assets”, etc. Under Backend: “Database indexing”, “Query optimization”, “Concurrent processing”, etc. The map can grow as you think of more sub-ideas. Use keywords, not long sentences, in each node. You can also add small illustrations or icons – some people find this helps memory and creativity.
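If you want to keep a map under version control or convert it into a document later, a nested dictionary is enough to capture it in code. The branches below simply mirror the website-performance example above; all topics are illustrative placeholders:

```python
# A mind map is just a tree; a nested dict captures the example above.
mind_map = {
    "Improve Website Performance": {
        "Frontend Optimization": ["Minify CSS/JS", "Lazy-load images", "Cache assets"],
        "Backend Optimization": ["Database indexing", "Query optimization", "Concurrent processing"],
        "Infrastructure": ["CDN for static content", "Autoscaling rules"],
        "User Behavior": ["Measure real-user page load times"],
    }
}

def print_outline(node, indent=0):
    """Linearize the map into an indented outline for sharing or documentation."""
    if isinstance(node, dict):
        for topic, children in node.items():
            print("  " * indent + topic)
            print_outline(children, indent + 1)
    else:  # a list of leaf ideas
        for idea in node:
            print("  " * indent + idea)

print_outline(mind_map)
```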

Why it helps: Mind maps tap into radiant thinking – one idea sparks another in a non-linear way. They’re very useful when you need to brainstorm requirements, causes of a problem, or potential solutions. For instance, if you’re trying to diagnose an unclear issue, you might put the symptom in the middle and branch out potential causes (sort of a free-form fishbone). Unlike a list, a mind map doesn’t force hierarchy too early; it encourages you to dump ideas and then see structure.

Tips: Don’t worry about order or correctness in the brainstorming phase – put all ideas out, you can prune later. Use colors or different line styles to group related branches (e.g. all performance ideas in red). Mind mapping software can be handy (like XMind, MindMeister, or even drawing tools in Notion/OneNote), especially for rearranging and sharing maps. After creating a mind map, you might convert it to a more structured document – the mind map’s value was in generation and initial organization, but often you’ll present the results in a linear way.

Fishbone (Ishikawa) Diagram

We introduced the Fishbone Diagram earlier as part of Root Cause Analysis. It’s worth reiterating here as a tool because it’s straightforward and widely applicable for diagnosing problems. The fishbone helps systematically list possible causes of a problem by category.

How to use: Draw a horizontal line (the fish’s spine) pointing to the right, where the head will be the problem statement (effect). Draw angled lines (fishbones) off the spine for each major category of causes. Standard categories depend on context: in manufacturing it’s often Man, Machine, Method, Material, Measurement, Environment. In software projects, you might define categories like: People (skills, staffing, communication), Process (requirements, testing, deployment process), Tools/Technology (frameworks, libraries, hardware), External (third-party services, external APIs, clients). Write these at the end of each bone. Then for each category, brainstorm specific causes and draw them as smaller branches of that bone. For example, if the problem is “High latency in web application”, under Tools/Technology you might have sub-branches “Database queries not optimized” and “Server GC pauses”; under Process you might have “No load testing done” as a cause; under External maybe “Third-party API slow responses”.

Once populated, you review all potential causes on the fishbone and identify which are most likely or require further investigation.
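A lightweight way to support that review is to pair each candidate cause with the check that would confirm or rule it out. The sketch below reuses the latency example; the verification steps are illustrative assumptions:

```python
# Candidate causes from the latency fishbone, each tagged with a follow-up check.
fishbone = {
    "Process": {
        "No load testing done": "check the CI/CD pipeline history for load-test stages",
    },
    "Tools/Technology": {
        "Database queries not optimized": "inspect the slow-query log",
        "Server GC pauses": "correlate GC metrics with latency spikes",
    },
    "External": {
        "Third-party API slow responses": "compare upstream latency dashboards",
    },
}

for category, causes in fishbone.items():
    for cause, check in causes.items():
        print(f"[{category}] {cause} -> verify by: {check}")
```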

Tips: Be specific in writing causes. Instead of “Database issues” write “Insufficient indexing on user table” (specific cause). The fishbone is a team exercise typically – use it in a meeting or retrospective to get everyone’s input on causes. This encourages a shared understanding of the problem’s complexity. After making it, it’s common to decide on actions or data needed to confirm which causes are the real ones. Not every branch will turn out to matter, but having them laid out prevents tunnel vision on one hypothesis. The fishbone diagram is a cause-generating tool; you typically follow it with data gathering or experiments to validate causes.

SWOT Analysis

SWOT stands for Strengths, Weaknesses, Opportunities, Threats. It’s a framework mainly used in strategic planning and decision-making to evaluate an idea or situation from four key angles:

SWOT is often depicted as a 2x2 grid, with Strengths/Weaknesses on top (internal) and Opportunities/Threats on bottom (external).

How to use: Clearly define the subject of the SWOT – e.g., “Launch of Product X” or “Team’s data analytics capability” or even personal career planning. Then brainstorm bullet points for each of the four categories. Be honest and specific. For a software startup considering a new product: Strengths might include “Innovative algorithm (patented), Agile team of 5 experienced devs”; Weaknesses: “Limited marketing budget, No in-house UX designer”; Opportunities: “Growing demand in this sector, competitor product has poor reviews – we can fill gap”; Threats: “A big tech company rumored to enter this space, evolving privacy regulations could impose constraints.” Once listed, SWOT can help you decide if you should proceed, where to focus (leverage strengths, shore up weaknesses), and what strategy to use (e.g., match your strengths to opportunities, create contingency plans for threats).

Tips: Use SWOT as a discussion tool – it’s often the conversation around each point that yields insights. It’s qualitative, so to make it actionable, prioritize the points: which strengths are core to build on, which weaknesses are urgent to fix, etc. In technical decision-making, SWOT can be applied, for example, to choosing a technology: Strengths (of using Tech A) vs Weaknesses, etc., combined with external factors (Opportunity: strong community support for A, Threat: A is new and not battle-tested). This can supplement more quantitative analysis. Be sure to update the SWOT as things change; it’s a snapshot in time.

OODA Loop

The OODA Loop is a decision-making framework developed by US Air Force Colonel John Boyd, originally for air combat, but widely applicable to business and agile environments. OODA stands for Observe, Orient, Decide, Act, and it is meant to be a continuously looping cycle rather than a one-time process. The goal is to execute this loop faster and better than an opponent or faster than conditions change, thereby gaining an advantage through agility.

The loop emphasizes agility – being able to iterate through these steps faster can outmaneuver competitors or adapt to change quickly. For instance, in cybersecurity incident response (which often employs OODA thinking), the team that quickly observes an attack, correctly orients by identifying the attack type, decides on countermeasures, and acts to implement defenses will mitigate damage much more effectively than a slow-moving team.

Strengths: OODA is excellent for dynamic problem-solving where conditions evolve and you can’t find a static one-time solution. It encourages continuous learning – every action gives you feedback (via observation) to adjust your approach. It’s implicitly iterative and empirically-driven, much like agile methodologies. In competitive scenarios (business competition, cyber war, sports), OODA gives a mental model to “stay ahead” by cycling faster than the opponent. The concept of orientation highlights how critical our mental models are – reminding us to update our assumptions and perspectives as we get new information (instead of sticking to a plan blindly).

Use in software/tech: Startup companies often live by an OODA-like loop: they observe market response to their product, orient by analyzing metrics and user feedback, decide on a pivot or feature change, act by releasing an update, then observe again how it goes. The ones that iterate quickly can outrun competitors (who may be larger but slower). In DevOps, this is analogous to monitoring, incident response, fix, and redeploy cycles. Another use is personal productivity – for example, a developer debugging an issue can think in OODA terms: Observe (collect logs, error messages), Orient (hypothesize cause from the data, recall similar bugs), Decide (pick a likely cause to test or a fix to apply), Act (apply fix, deploy), then Observe the result of the fix, and repeat if not resolved.
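That debugging example can be written as a literal loop. The sketch below is a toy model – the “system”, the known-issue patterns, and the playbook are all hypothetical:

```python
def observe(system):
    """Gather evidence: here, just the latest entry in a toy error log."""
    return system["error_log"][-1] if system["error_log"] else None

def orient(evidence, known_issues):
    """Match the evidence against known failure patterns (our mental model)."""
    for pattern, hypothesis in known_issues.items():
        if pattern in evidence:
            return hypothesis
    return "unknown cause"

def decide(hypothesis):
    """Pick a countermeasure from a (hypothetical) playbook."""
    playbook = {"connection pool exhausted": "increase pool size"}
    return playbook.get(hypothesis, "add logging and try to reproduce")

def act(system, action):
    """Apply the action; in this toy model only the right fix clears the error."""
    if action == "increase pool size":
        system["error_log"].clear()

system = {"error_log": ["ERROR: connection pool exhausted"]}
known_issues = {"connection pool exhausted": "connection pool exhausted"}

for cycle in range(1, 4):                     # keep cycling until resolved
    evidence = observe(system)
    if evidence is None:
        print(f"Cycle {cycle}: no new errors - resolved.")
        break
    hypothesis = orient(evidence, known_issues)
    action = decide(hypothesis)
    print(f"Cycle {cycle}: {evidence!r} -> {hypothesis} -> {action}")
    act(system, action)
```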

Tips: Boyd’s key insight was tempo. Simply going through OODA is not enough; how fast and correctly you cycle matters. So one actionable tip is to cut down the time in each step without sacrificing too much accuracy. For instance, don’t get paralyzed in Orientation analyzing data for too long – often a timely decision with imperfect info beats a perfect decision made too late. Use automation and tools to speed up Observation (e.g., dashboards, alerts) and some parts of Orientation (data analysis). Also, be willing to discard or update your mental model if observations contradict it – that’s effective re-orientation. Teams can explicitly practice OODA by running wargame simulations or drills where they must go through the cycle rapidly (chaos engineering in IT is somewhat akin to this – introduce an outage and practice the loop).

Figure: An infographic illustrating how NASA engineers on the ground helped the Apollo 13 crew improvise a CO₂ scrubber adapter (“fit a square peg in a round hole”) using only materials available in the spacecraft. This real-life case demonstrates creative problem-solving under extreme constraints and time pressure.

Case Studies: Problem-Solving in Action

To ground the frameworks and techniques in reality, let’s examine a few case studies from different domains (software, business, science/engineering) that highlight problem-solving best practices and success patterns:

Case Study 1: Engineering Under Pressure – Apollo 13 “Square Peg in a Round Hole”

One of the most dramatic problem-solving feats in engineering history took place during the Apollo 13 mission (1970). Although Apollo 13 is a space mission (science/engineering), we include it as an analog for creative troubleshooting under pressure, akin to debugging a critical production incident in software.

The Problem: An oxygen tank explosion left the Apollo 13 Command Module crippled, and the three astronauts had to use the Lunar Module as a “lifeboat.” However, the Lunar Module’s life support was only designed for 2 people for 2 days, and the CO₂ scrubbers (which remove carbon dioxide from the air) were quickly becoming saturated with three people on board. The Command Module had plenty of fresh CO₂ scrubber canisters, but there was a catch: the Command Module canisters were square and the Lunar Module sockets were round – they literally had to fit a square peg in a round hole to use the spare canisters. If they failed, the crew would asphyxiate before returning to Earth.

Solution Process: NASA engineers in Mission Control sprang into action. This was essentially an extreme example of creative problem solving with constraints – they could only use materials known to be on the spacecraft. The team emptied boxes of spacecraft equipment on a table (maps, space suit hoses, duct tape, etc.) to see what they had to work with. Through brainstorming and rapid prototyping on the ground, they devised an improvised adapter using a plastic bag, cardboard from a checklist cover, lots of duct tape, and a sock, among other items. The idea was to tape the square canister into one end of a hose assembly built from these materials, creating an airtight fit into the round hole.

They communicated the step-by-step assembly instructions to the astronauts via voice (no pictures could be sent). The astronauts followed the “recipe” and successfully built the jury-rigged adapter, allowing the square lithium hydroxide canisters to function in the round LiOH slots of the Lunar Module. The CO₂ levels began dropping and stayed in the safe range for the rest of the journey, saving the crew.

Key Takeaways: This case is famous for demonstrating lateral thinking and rapid prototyping. Several principles and methodologies are illustrated:

While our software problems are rarely life-or-death in 24 hours, we can still learn from Apollo 13 to embrace constraints, trust teamwork, and think flexibly. Many post-mortems of IT outages reference Apollo 13 as inspiration for calm problem-solving under pressure.

Case Study 2: Business Strategy – Airbnb’s Turnaround with Design Thinking

Context: In 2009, Airbnb was a struggling startup. They had launched their platform for people to rent out airbeds and rooms, but weren’t gaining traction – revenues were flat (~$200/week) and the founders were close to quitting. They had a problem: lots of apartments were listed in New York City (their biggest market), but customers weren’t booking.

Problem Identification: By observing their own website and listings (empathizing with the user), the founders noticed almost all listings had poor, amateur photos – dark, low resolution, unappealing. Users couldn’t see the value in what they might rent. Traditional startup wisdom might say “this doesn’t scale” or focus on SEO or other issues, but through a design-thinking lens they reframed the problem as: “People aren’t booking because they can’t see the space quality – we need to showcase it better.”

Solution (Scrappy and User-Centric): Paul Graham of Y Combinator advised them to do things that don’t scale: he suggested they go to New York, meet hosts, and take high-quality photos themselves. The Airbnb founders flew to NYC with a decent camera, visited hosts, and replaced the bad pics with beautiful, wide-angle, well-lit photos. Essentially, they implemented a quick prototype solution – better photos – to test if this would improve bookings.

Result: The impact was immediate: within a week, weekly revenue doubled from $200 to $400. This was the first sign of growth in months. The better visual presentation made the listings far more attractive, and bookings followed. This success validated the hypothesis that presentation was key. Airbnb then institutionalized this focus on design – eventually offering free professional photography to hosts as a program. That attention to user experience (in this case, how the property is presented to the guest) became a cornerstone of Airbnb’s brand.

Key Takeaways: Airbnb’s story highlights several themes:

For practitioners, Airbnb’s story suggests: when faced with stagnation, go back to your users, observe directly, and don’t be afraid to address problems in an unconventional, even manual way as a test. Sometimes the breakthrough is in the basics (like clear photos or clear information) rather than something high-tech.

Case Study 3: Scientific Research – Discovery of Penicillin (Serendipity and Prepared Minds)

Scientific breakthroughs often involve long, methodical problem-solving – but some also involve serendipitous events recognized by prepared problem-solvers. A classic example is the discovery of penicillin by Sir Alexander Fleming in 1928.

Problem (Background): Fleming was researching bacteria and had been searching for antibacterial agents (a hot topic after WWI to combat infections). It wasn’t a directed problem like “find penicillin”, but generally “how to kill bacteria without harming patients.”

Serendipitous Observation: Fleming had a tendency to be a bit untidy in his lab. Famously, he left a petri dish of Staphylococcus bacteria out while he went on vacation. Upon return, he observed something unusual: a mold (later identified as Penicillium notatum) had contaminated the dish, and around the mold colony, the bacteria were killed off – a clear ring where no bacteria grew. Instead of discarding the “failed” experiment, Fleming’s curiosity was piqued. He observed carefully and oriented by recalling his knowledge: he knew certain molds can produce antibacterial substances (there were earlier hints in literature). He hypothesized that the mold was secreting something that killed the bacteria.

Problem-Solving & Experimentation: Fleming isolated the mold and grew it in a pure culture, then tested its filtrate on other bacteria. He found it was effective in killing many Gram-positive bacteria. This was the decide and act part – he decided this phenomenon was worth pursuing and acted by conducting more tests. However, he faced new problems: penicillin (as he named the substance) was difficult to purify and produce in quantity. Fleming himself wasn’t able to solve that, and it took about a decade more for a team at Oxford (Chain, Florey, and others) to figure out how to mass-produce penicillin as a drug by the early 1940s, which then saved countless lives in WWII and beyond.

Key Takeaways:

Analogy to software/tech: Penicillin’s discovery has parallels in technology – many inventions came from noticing “happy accidents”. For instance, Post-it Notes glue was a failed attempt to make a strong adhesive but was recognized as a useful low-tack glue by a 3M scientist; or the creation of the first wiki happened when Ward Cunningham noticed engineers collaborating via email and thought to create a better tool (observing a need from something not working smoothly). The lesson is to maintain a curious mindset. When something unintended happens – a test passes when it shouldn’t, a user uses your product in an unexpected way – instead of dismissing it, investigate. You might find an opportunity or a hidden bug (or both!).

Case Study 4: Engineering/Systems – Toyota’s Lean Manufacturing and the 5 Whys

Context: Toyota, in the mid 20th century, revolutionized manufacturing with the Toyota Production System (TPS), introducing lean principles and techniques like just-in-time production, jidoka (automation with a human touch), and continuous improvement (Kaizen). A big part of their success was a disciplined approach to problem-solving on the factory floor.

Problem Example: Let’s take a generic but common scenario from manufacturing (as recounted by Taiichi Ohno of Toyota): A machine stops working on an assembly line, halting production.

At Toyota, the response was to apply the “Five Whys” to find the root cause:

  1. Why did the machine stop? – A fuse blew because the machine was overloaded.
  2. Why was it overloaded? – The bearing was not sufficiently lubricated.
  3. Why was it not lubricated? – The lubrication pump was not pumping enough oil.
  4. Why was the pump not pumping enough? – The pump’s shaft was worn and rattling.
  5. Why was the shaft worn? – There was no filter on the intake, so metal shavings got into the oil and wore it down.

Through this, the root cause identified is the lack of a filter in the lubrication system. The action is then to install a filter (and replace the shaft and fuse). If they had stopped at the first “why” (blown fuse) and just replaced the fuse, they’d have a recurrent problem.

Result: By systematically using this on numerous issues, Toyota was able to address underlying issues and dramatically improve reliability and efficiency. This contributed to Toyota’s reputation for high quality and continuous improvement, making them one of the top automakers.

Key Takeaways:

Analogy in Software: Many software teams now use a “5 Whys” or similar approach in incident postmortems. For example, an outage might be traced: Why outage? – Deploy caused service crash. Why deploy caused crash? – A bug in code not caught by tests. Why not caught? – Tests didn’t cover that scenario. Why not? – The scenario wasn’t considered in requirements (or dev was rushed). Why rushed? – Perhaps unrealistic deadlines or lack of code review. Now you have actionable causes: improve test coverage, adjust planning process, etc. The idea is the same: keep digging until you find process or system changes that will prevent classes of problems, not just the one instance.

These case studies underscore that real-world problem-solving often requires a blend of techniques:

Across all, a few patterns emerge: clarity in defining the true problem, willingness to iterate/experiment, and learning from each attempt. And importantly, sharing the knowledge so others can build on it – a solved problem for one person can become a known technique for many (as we are doing in this guide).

Advanced Problem-Solving Techniques and Tips

For those who want to take their problem-solving skills to the next level, especially in technical fields, it helps to practice and internalize some advanced techniques. These techniques—abstraction, decomposition, analogical thinking, and computational modeling—are often used by top engineers and scientists, sometimes unconsciously. By consciously developing these skills, you can tackle more complex problems with confidence.

Abstraction

Abstraction is the art of focusing on the important information while ignoring irrelevant details. In problem-solving, it means creating a simplified model of the problem. You abstract away complexities to get to the core. This is ubiquitous in software engineering (think of how high-level programming languages abstract machine code, or how we use abstract data types like “list” without worrying about memory addresses). In everyday terms, drawing a map is an abstraction of reality – you omit most real-world details to show only what matters for navigation.

How to apply: When faced with a complicated problem, ask yourself:

Math and computational thinking provide tools for abstraction: defining variables to represent problem elements, using diagrams or formulas. A practical tip is to restate the problem in abstract terms. Suppose you have a scheduling conflict in project management: rephrase it as an abstract scheduling problem (resources vs. tasks over time) – this can sometimes reveal it’s analogous to a known problem like “multiprocessor scheduling” for which algorithms exist.
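As a small illustration of that restating step, here is a minimal Python sketch assuming the conflict boils down to choosing as many non-overlapping time slots as possible; once the problem is abstracted that far, a textbook greedy algorithm applies (the task times are hypothetical):

```python
def max_non_overlapping(intervals):
    """Classic greedy: repeatedly take the interval that finishes earliest and still fits."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen

# Concrete tasks as (start hour, end hour), with all domain detail abstracted away.
tasks = [(9, 11), (10, 12), (11, 13), (12, 14)]
print(max_non_overlapping(tasks))  # -> [(9, 11), (11, 13)]
```

The point is not the greedy algorithm itself but the move that made it available: stripping the scenario down to intervals.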

Training tip: A good exercise is solving puzzles or coding challenges that force you to abstract. For instance, Project Euler problems or algorithmic challenges: you often have to turn a word problem into a mathematical model before solving. Another exercise is to take a real-world scenario and draw an abstract model (like a flowchart or a set of equations). Also, when reading others’ solutions or code, notice how they choose appropriate levels of abstraction (e.g., a function named calculateTax() abstracts away the details of tax calculation).

Caution: After abstracting, remember to map back to reality. A solution to the abstract problem must be translated to the concrete one. Abstraction is a balance: too much abstraction and you might miss an important detail (e.g. ignoring a constraint that later breaks your solution), too little and you get lost in the weeds. With practice, you get better at choosing the right level of abstraction for the task at hand.

Decomposition (Divide and Conquer)

Decomposition is breaking a large problem into smaller, more manageable sub-problems, solving each part, and then combining them to form a solution to the original problem. This is one of the most powerful techniques in both engineering and general problem-solving because it tackles complexity by splitting it.

How to apply: The approach often called “divide and conquer” in algorithms is exactly this. Identify independent or logically separate parts of the problem. For example:

A more formal method of decomposition for decision problems is to use decision trees or stepwise refinement. In Pólya’s terms, one of the heuristic strategies was: solve a simpler (or smaller) problem first. That’s decomposition – find a subproblem that you know how to solve, solve it, then use that to help solve the bigger problem.

Training tip: Practice breaking things down deliberately. When given a project or a large task, don’t jump in headfirst – spend some time listing the sub-tasks. Some people like to visualize this as an outline or tree. If you do competitive programming or algorithmic puzzles, always think: can the problem be split into sub-tasks (e.g., sort input first, then process, etc.)? Take something like the classic Tower of Hanoi puzzle – it’s a great exercise in recursive decomposition (move N-1 disks, then move the largest, then move N-1 back). Recursion in programming is a formal way to apply divide and conquer.
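As a concrete illustration, here is the Tower of Hanoi decomposition from the tip above, sketched in a few lines of Python:

```python
def hanoi(n, source, target, spare):
    """Move n disks from source to target: two smaller subproblems plus one trivial move."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)                  # clear the top n-1 disks out of the way
    print(f"move disk {n} from {source} to {target}")    # move the largest disk
    hanoi(n - 1, spare, target, source)                  # restack the n-1 disks on top of it

hanoi(3, "A", "C", "B")   # prints the 2**3 - 1 = 7 moves
```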

Caution: Sometimes subproblems are not as independent as they seem – solving one might change another. So after decomposing and solving parts, always integrate and test the whole, as interactions might introduce issues. Also, don’t lose sight of the big picture; constantly ask “how do these pieces fit together?” (Systems thinking helps here, to ensure your decomposition covers all aspects and the interfaces between parts are managed.)

Analogical Thinking

Analogical thinking means using an analogy – drawing parallels from one domain or problem to another – to transfer knowledge or solutions. It’s essentially asking: “Have I (or has someone) solved a problem similar to this before?” Sometimes the analogous problem comes from a completely different field, but the structure of the problem or solution might fit.

We saw analogies in action: Velcro’s invention from burrs sticking to fur, or using nature as inspiration for designs (biomimicry, like the kingfisher-inspired bullet train nose). In software, design patterns are a form of analogy – a pattern is like saying “this problem is like that classic problem we know, so use the known solution structure.”

How to apply: When stuck, consciously search your memory (or even literature) for similar problems or systems:

Analogies can also be internal: use simpler analogies to understand a problem, like the common analogy of water flowing in pipes for electric current – which might help an engineer solve a circuit issue by thinking in terms of fluid pressure and flow (so-called isomorphic problems in cognitive science).

Training tip: Increase your knowledge base to have more material for analogies. Reading broadly (not just in your field) arms you with analogies from science, nature, history, etc. Practice using analogies by taking a concept you know well and mapping its elements onto a new problem (even if just as a thought experiment). Puzzle games that involve analogies (like certain brain teasers) can sharpen this thinking too. Also, when you learn any solution, store it not just as a specific fix but as a pattern that might apply elsewhere. For instance, “break a problem into smaller ones” is the pattern behind recursion; the next time you face a scheduling problem, you might recall that it is analogous to bin-packing or another known NP-hard problem, which tells you to reach for approximation or heuristic methods.

Caution: Ensure the analogy truly fits; forcing a bad analogy can mislead. Always check where the analogy breaks down. It’s a starting point for insight, not the final proof. And be mindful of context – two problems may look similar but differ in a critical assumption.

Computational Modeling and Simulation

With the power of computers, computational modeling has become a go-to advanced technique for complex problem-solving. This means building a computer model (mathematical or algorithmic) of the problem and running simulations to understand behavior or test solutions.

How to apply: Identify the key variables and rules of your system. Create a model – it could be:

For example, if you’re solving a queuing problem (e.g. reducing wait time in a call center or server request queue), you might simulate different scheduling algorithms or numbers of servers. If you’re designing a new processor, you simulate its logic on various workloads before ever manufacturing it. In software architecture, you can simulate traffic to see how your system scales (using load testing tools as a form of simulation).
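For the queuing example, even a crude, hand-rolled simulation (no special library needed) can answer “how many servers do we need?” more reliably than intuition. The arrival rate and service times below are made-up assumptions:

```python
import random

def average_wait(num_servers, num_customers=10_000, arrival_gap=1.0, mean_service=2.5, seed=42):
    """Toy queue: customers arrive at fixed intervals, service times are exponential."""
    rng = random.Random(seed)
    free_at = [0.0] * num_servers                 # time at which each server next becomes free
    total_wait = 0.0
    for i in range(num_customers):
        arrival = i * arrival_gap
        server = min(range(num_servers), key=lambda s: free_at[s])   # least-busy server
        start = max(arrival, free_at[server])
        total_wait += start - arrival
        free_at[server] = start + rng.expovariate(1 / mean_service)
    return total_wait / num_customers

for servers in (3, 4, 5):
    print(f"{servers} servers: average wait ≈ {average_wait(servers):.2f} time units")
```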

Why useful: Models let you safely explore scenarios. They can reveal emergent behavior that’s hard to reason analytically. For instance, you might simulate an epidemic spread (as we all saw with COVID models) to see how different interventions affect outcomes – it’s not obvious without the sim, because of nonlinear interactions.

In less grand terms, even writing a quick script to brute-force a small version of a problem can give insight. Let’s say you have an NP-hard optimization problem in your project (like scheduling tasks with constraints). You might not find a formula easily, but you can write a program to try many combinations for a smaller instance, observe patterns, maybe derive a heuristic from that.

Training tip: If you haven’t already, build some familiarity with tools for modeling: Excel (for simple models or Monte Carlo simulations), Python (with libraries like SimPy for simulation or even just writing loops), MATLAB or Mathematica for mathematical modeling, or specialized simulation software in your field (for networks, consider ns-3 or Cisco Packet Tracer; for business processes, maybe Arena). Even learning to use a spreadsheet with random functions to simulate, say, 1000 runs of a project’s schedule with varying delays (to see probability of finishing by deadline) can improve your intuition.
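To make the “1000 runs of a project’s schedule” idea concrete, here is a minimal Monte Carlo sketch in plain Python; the task duration ranges and the deadline are invented for illustration:

```python
import random

# Each task: (optimistic, most likely, pessimistic) duration in days - assumed values.
TASKS = [(3, 5, 10), (2, 4, 8), (5, 8, 15)]
DEADLINE = 20

def one_run(rng):
    """Sample one possible project duration using a triangular distribution per task."""
    return sum(rng.triangular(low, high, mode) for low, mode, high in TASKS)

rng = random.Random(0)
runs = [one_run(rng) for _ in range(1000)]
on_time = sum(total <= DEADLINE for total in runs) / len(runs)
print(f"Estimated probability of finishing within {DEADLINE} days: {on_time:.0%}")
```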

Also, learn basic statistics and probability, because interpreting simulation results and ensuring they’re valid requires understanding of variance, confidence, etc. Competitions like Kaggle (data science) can indirectly train you in modeling, as you’ll create models of data to solve problems.

Caution: Models are only as good as their assumptions. There’s a risk of the model not matching reality (e.g., missing a factor). Always validate your model on known cases if possible. And watch out for overfitting – if you adjust your model too much to match observed data, it might lose generality. Keep models as simple as possible while still capturing essential behavior (Occam’s Razor principle in modeling). Also consider the cost: high-fidelity simulations can be computationally expensive; sometimes a rough analytical estimate is sufficient and faster.

Putting it All Together

Advanced problem solvers often combine these techniques. For instance, when facing a new, complex challenge, they might:

  1. Abstract the problem to understand its essence.
  2. Decompose it into parts they can tackle.
  3. Recall an analogy from a past project or known pattern that applies to one or more parts.
  4. Solve subproblems (perhaps using analogies or established methods).
  5. Use computational modeling to test the interactions of the parts or refine certain solutions.
  6. Iterate back – maybe the model reveals a need for a different abstraction or another decomposition.
  7. All the while, remain aware of cognitive biases (making sure they aren’t sticking to a wrong analogy out of fixation, etc.) and maintain clarity on objectives.

Practical tip: Develop a personal toolkit or checklist. Some engineers have mental checklists like: “If stuck, try a simpler case; if still stuck, draw a diagram; list assumptions; consider analogies; break it down; simulate if possible; consult someone for a fresh perspective.” This kind of systematic approach ensures you eventually hit upon a fruitful technique. Over time, this becomes second nature.

Remember, these advanced skills improve with deliberate practice:

In the next section, we’ll talk about how to continuously develop such skills over the long term.

Developing Problem-Solving Skills: Training and Routines

Improving problem-solving is a journey. While reading about techniques is valuable, real growth comes from practice, reflection, and continuous learning. Here we outline strategies for training problem-solving abilities and sustaining their improvement, whether you’re an engineer or an interested lifelong learner:

Deliberate Practice and Challenges

Just as one would practice piano or sports, practicing problem-solving in a focused way is key. This means tackling problems that are just outside your comfort zone (not too easy, not impossibly hard) and learning from the experience.

The concept of deliberate practice, coined by Anders Ericsson, suggests focusing on specific sub-skills and getting feedback. So if you notice, for example, you struggle with formulating the problem (the understanding phase), practice just that: take a complex scenario and spend time only on clearly defining the problem (maybe write a one-page problem statement) and have someone review if it’s clear. If you struggle with mathematical modeling, take simple physical problems and try to derive equations, then check with known solutions.

Reflection and Metacognition

Solving problems alone isn’t enough; reflecting on how you solved them and how you could improve is crucial to get better long-term (this is metacognition – thinking about thinking).

Educational Programs and Resources

For a structured development of problem-solving, many programs and resources are available:

Many of these resources emphasize routines:

Routines and Habits

Incorporate problem-solving into your daily/weekly routine so it becomes second nature:

Gaps and Continuing Growth

Even with all these strategies, acknowledge that one can always improve. Some current gaps in typical training:

In essence, treat problem-solving as a lifelong practice. Every challenge at work or in life is an opportunity to apply and hone your skills. Over years, you build an intuition – that “problem-solving sense” – which draws on a wealth of experiences and patterns. At the same time, humility is important: knowing your limits and knowing when to seek help or new knowledge is part of being an expert problem-solver.

With structured training, conscious routines, and a passion for learning, intermediate engineers and even the general public can significantly enhance their problem-solving prowess. This not only leads to better outcomes in projects or work but also builds confidence and adaptability in the face of any challenge.

Future Directions and Conclusion

As we conclude this practitioner’s guide, it’s worth reflecting on the current gaps and future opportunities in problem-solving literature and practice:

In conclusion, problem-solving is as much an art as a science. We now have a rich arsenal of frameworks (from cognitive approaches to engineering methodologies), a solid understanding of the mental processes involved (and their pitfalls), and a variety of practical tools to apply. A successful problem-solver is one who continually learns, adapts, and reflects – turning each solved problem into a stepping stone for tackling the next.

Whether you are an intermediate engineer looking to level up or a professional in another field, applying the concepts from this guide – understanding the problem deeply, leveraging the right methodology, using tools to structure your approach, and learning from each outcome – will set you on the path of continuous improvement. The problems of the world, big and small, await creative and analytical minds. With the insights and techniques outlined here, you are better equipped to face them.

Remember the simple but profound words of George Pólya, which still ring true: “Solving problems is a practical art, like swimming, or skiing, or playing the piano: you can learn it only by imitation and practice.” So dive in, practice often, reflect on your experiences, and you will progressively become the adept problem-solver you aspire to be.

References

  1. Polya, G. (1945). How to Solve It. Princeton University Press – (Four-step problem-solving framework: Understand, Plan, Execute, Review).
  2. Wing, J. (2011). “Computational thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent.” – Definition of computational thinking.
  3. Koen, B. (2003). Discussion of the Method: Conducting the Engineer's Approach to Problem Solving. Oxford University Press – (“An engineer solves problems using heuristics... complex and poorly understood situation with limited resources...” quote).
  4. IResearchNet Psychology Article on Problem Solving – Overview of cognitive processes (memory, reasoning) and barriers (functional fixedness, confirmation bias).
  5. ASQ (American Society for Quality). What is Root Cause Analysis? – Definition and approaches to RCA.
  6. Nielsen Norman Group (2016). Design Thinking 101 by S. Gibbons – Definition and phases of Design Thinking (user-centric innovation).
  7. Wikipedia: TRIZ – Theory of Inventive Problem Solving (40 principles, contradiction analysis).
  8. Wikipedia: Systems Thinking – Definition: viewing problems in terms of wholes and relationships rather than parts.
  9. Graphite Blog – Brainwriting: Better alternative to brainstorming – Cites Paulus’s research on group vs individual brainstorming effectiveness.
  10. Forrester Consulting (2018). The Total Economic Impact of IBM’s Design Thinking – Finding of 2x faster to market and 300% ROI from adopting design thinking.
  11. First Round Review (2015). How Design Thinking Transformed Airbnb – Joe Gebbia story on improving photos doubling revenue.
  12. Space Center Houston (2019). Apollo 13 CO2 scrubber infographic – Details steps NASA engineers used to improvise a CO2 filter adapter.
  13. Toyota / Taiichi Ohno. The Toyota Production System – Example of 5 Whys analysis in manufacturing (e.g., machine stopped scenario).
  14. Chi, M., Glaser, R., Rees, E. (1982). Expertise in problem solving – Studies showing experts categorize and solve problems using deep structure (expert vs novice differences).
  15. Csikszentmihalyi, M. (1996). Creativity: Flow and the Psychology of Discovery and Invention – Discusses problem-finding as a key component of creative breakthroughs.
  16. Kahneman, D. (2011). Thinking, Fast and Slow – Overview of cognitive biases and their impact on decision-making (e.g., confirmation bias, anchoring).
  17. Leetaru, K. (2019). Forbes: How AI is helping in problem solving – Example of hybrid human-AI approach to complex problems.
  18. Ericsson, A. (2008). Deliberate Practice and Acquisition of Expert Performance – Importance of focused practice and feedback in skill development (applies to problem-solving skills).
  19. Meadows, D. (2008). Thinking in Systems: A Primer – Introduction to systems thinking, feedback loops, and leverage points.
  20. Jonassen, D. (1997). Instructional design model for well-structured and ill-structured problem-solving learning outcomes – Emphasizes practice on ill-structured problems for transfer of skills.

problem-solving