Analyzing my own meta-algorithm reveals three basic approaches I use for producing algorithms.  Reviewing the literature on algorithmic problem solving also frequently reveals three or four approaches, but they are not identical to mine.

My first approach, which I rarely see described in these terms, is the analytic interview.  While it is somewhat analogous to Polya's “understand the problem” (Brookshear, 2009, p. 216), it is psychologically focused.  I really do find that many, many 'problems' aren't: they only appear problematic because of their framing or phrasing.  Walid's lunch problem (Salem, 2010) is an example.  If the intent is to find an adequate solution, most questions can be clarified and thereby answered, or passed on to my second approach.  There are, however, an infinite number of 'problems' that can be made insoluble simply by excluding necessary information or by insisting on counterfactual axioms, and an interview should make this distinction obvious.  It is also possible that the interview will reveal pertinent changes previously assumed to be unrelated to the issue.

Some call my second approach 'divide and conquer'.  It is perhaps most analogous to the reductionist approach, but it has the inherent efficiency of cutting every problem in half before analysis.  It is easiest to describe as it applies to connectivity issues.  Before one even asks what the actual failure might be, a quick sketch of how the system should work gives one a halfway point to test.  For example, in a large WAN there is generally a router approximately equidistant from the two terminating points of the issue.  Connectivity to this point, or the lack thereof, immediately reduces the problem scope by fifty percent.  Repeating the process can reduce hundreds of potential problem points to a handful in a matter of minutes.
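The halving described above is essentially a binary search along the path between the two endpoints.  A minimal sketch, in which the hop names and the `reachable` test are hypothetical stand-ins (in practice the test would be a ping or similar probe to each device):

```python
def first_failing_hop(hops, reachable):
    """Binary-search an ordered path for the first unreachable hop.

    `hops` is the ordered list of devices between the endpoints;
    `reachable(hop)` reports whether we can still get that far.
    Assumes everything past the first failure is also unreachable.
    """
    lo, hi = 0, len(hops) - 1
    if reachable(hops[hi]):
        return None  # the whole path works; no failure to find
    while lo < hi:
        mid = (lo + hi) // 2  # the router roughly halfway along
        if reachable(hops[mid]):
            lo = mid + 1      # failure lies beyond the midpoint
        else:
            hi = mid          # failure is at or before the midpoint
    return hops[lo]

# Hypothetical example: a ten-hop path where hop "r6" onward is down.
path = [f"r{i}" for i in range(10)]
up = {f"r{i}" for i in range(6)}
print(first_failing_hop(path, lambda h: h in up))  # -> r6
```

Each probe halves the remaining scope, so a ten-hop path needs about four tests rather than ten, which is the "hundreds to a handful" effect.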

If interviews and iterative halving of the problem fail, I may 'shotgun' the problem.  I am aware of how undesirable the side-effects of shotgun troubleshooting can be, having left in my wake the occasional solution that nobody understands and that may not be replicable.  However, when time is critical we cannot always afford the luxury of adequate analysis, and simply visiting, as rapidly as possible, every accessible part of the system that could conceivably contribute to the solution (even parts with little apparent bearing on the issue) sometimes makes sense.  It is usually possible afterwards to look back at the last few changes made, deduce which was effective, and document that.
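The "look back and deduce what was effective" step only works if the shotgunning leaves an audit trail.  A sketch of one way to keep it honest, assuming hypothetical `apply`/`revert` actions and a `symptom_present` check (the fix names below are illustrative, not any real procedure):

```python
def shotgun(fixes, apply, revert, symptom_present):
    """Apply candidate fixes until the symptom clears, recording each,
    then revert them one at a time to deduce which actually mattered."""
    applied = []
    for fix in fixes:
        apply(fix)
        applied.append(fix)          # the audit trail
        if not symptom_present():
            break
    effective = []
    for fix in reversed(applied):
        revert(fix)
        if symptom_present():
            effective.append(fix)    # removing it brought the symptom back
        apply(fix)                   # put the working state back
    return applied, effective

# Hypothetical system state: the symptom persists until a service restart.
state = set()
applied, effective = shotgun(
    ["clear cache", "restart service", "rotate logs"],
    apply=state.add,
    revert=state.discard,
    symptom_present=lambda: "restart service" not in state,
)
print(effective)  # -> ['restart service']
```

The revert-and-retest pass is what turns an unexplained fix into something documentable and replicable.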

Researching others' algorithm-creation algorithms reveals many similarities and much overlap with my own, but our different experiences and goals ensure that the representations will differ.  I take Polya's fourth phase, “evaluate the program/solution for accuracy and for its potential as a tool for solving other problems” (ibid., p. 216), rather for granted, as I do King Fahd University's (n.d.) “Look for a related problem that was solved before”.  There is no reason to reinvent the wheel, but I lump these activities into the interview or documentation phase.

I imagine there are as many meta-algorithms as there are problem solvers.  Polya gave us an excellent description, and we can see how even an eleven-step algorithm-creation algorithm (University of Michigan, n.d.) can be summarized in his four steps.  I would also mention Bloom's taxonomy (Wikipedia, 2010), but I have already gone on too long.


Brookshear, J. Glenn (2009) Computer Science: An Overview, 10th ed.  Boston: Pearson Education, Inc.


King Fahd University of Petroleum and Minerals (n.d.) Algorithms and Problem Solving [Online].  Available from: (Accessed 13 February, 2010).


Salem, Walid (2010) RE: DQ2: Solvable/Unsolvable Problems (Whole Class) [Online].  Available from: (Accessed 14 February, 2010).


University of Michigan (n.d.) Thoughts on Problem Solving: Algorithm [Online].  Available from: (Accessed 14 February, 2010).


Wikipedia (2010) Bloom's Taxonomy [Online].  Available from: (Accessed 14 February, 2010).