  1. """
  2. Algorithms and classes to support enumerative combinatorics.
  3. Currently just multiset partitions, but more could be added.
  4. Terminology (following Knuth, algorithm 7.1.2.5M TAOCP)
  5. *multiset* aaabbcccc has a *partition* aaabc | bccc
  6. The submultisets, aaabc and bccc of the partition are called
  7. *parts*, or sometimes *vectors*. (Knuth notes that multiset
  8. partitions can be thought of as partitions of vectors of integers,
  9. where the ith element of the vector gives the multiplicity of
  10. element i.)
  11. The values a, b and c are *components* of the multiset. These
  12. correspond to elements of a set, but in a multiset can be present
  13. with a multiplicity greater than 1.
  14. The algorithm deserves some explanation.
  15. Think of the part aaabc from the multiset above. If we impose an
  16. ordering on the components of the multiset, we can represent a part
  17. with a vector, in which the value of the first element of the vector
  18. corresponds to the multiplicity of the first component in that
  19. part. Thus, aaabc can be represented by the vector [3, 1, 1]. We
  20. can also define an ordering on parts, based on the lexicographic
  21. ordering of the vector (leftmost vector element, i.e., the element
  22. with the smallest component number, is the most significant), so
  23. that [3, 1, 1] > [3, 1, 0] and [3, 1, 1] > [2, 1, 4]. The ordering
  24. on parts can be extended to an ordering on partitions: First, sort
  25. the parts in each partition, left-to-right in decreasing order. Then
  26. partition A is greater than partition B if A's leftmost/greatest
  27. part is greater than B's leftmost part. If the leftmost parts are
  28. equal, compare the second parts, and so on.
  29. In this ordering, the greatest partition of a given multiset has only
  30. one part. The least partition is the one in which the components
  31. are spread out, one per part.
  32. The enumeration algorithms in this file yield the partitions of the
  33. argument multiset in decreasing order. The main data structure is a
  34. stack of parts, corresponding to the current partition. An
  35. important invariant is that the parts on the stack are themselves in
  36. decreasing order. This data structure is decremented to find the
  37. next smaller partition. Most often, decrementing the partition will
  38. only involve adjustments to the smallest parts at the top of the
  39. stack, much as adjacent integers *usually* differ only in their last
  40. few digits.
  41. Knuth's algorithm uses two main operations on parts:
  42. Decrement - change the part so that it is smaller in the
  43. (vector) lexicographic order, but reduced by the smallest amount possible.
  44. For example, if the multiset has vector [5,
  45. 3, 1], and the bottom/greatest part is [4, 2, 1], this part would
  46. decrement to [4, 2, 0], while [4, 0, 0] would decrement to [3, 3,
  47. 1]. A singleton part is never decremented -- [1, 0, 0] is not
  48. decremented to [0, 3, 1]. Instead, the decrement operator needs
  49. to fail for this case. In Knuth's pseudocode, the decrement
  50. operator is step m5.
  51. Spread unallocated multiplicity - Once a part has been decremented,
  52. it cannot be the rightmost part in the partition. There is some
  53. multiplicity that has not been allocated, and new parts must be
  54. created above it in the stack to use up this multiplicity. To
  55. maintain the invariant that the parts on the stack are in
  56. decreasing order, these new parts must be less than or equal to
  57. the decremented part.
  58. For example, if the multiset is [5, 3, 1], and its most
  59. significant part has just been decremented to [5, 3, 0], the
  60. spread operation will add a new part so that the stack becomes
  61. [[5, 3, 0], [0, 0, 1]]. If the most significant part (for the
  62. same multiset) has been decremented to [2, 0, 0] the stack becomes
  63. [[2, 0, 0], [2, 0, 0], [1, 3, 1]]. In the pseudocode, the spread
  64. operation for one part is step m2. The complete spread operation
  65. is a loop of steps m2 and m3.
  66. In order to facilitate the spread operation, Knuth stores, for each
  67. component of each part, not just the multiplicity of that component
  68. in the part, but also the total multiplicity available for this
  69. component in this part or any lesser part above it on the stack.
  70. One added twist is that Knuth does not represent the part vectors as
  71. arrays. Instead, he uses a sparse representation, in which a
  72. component of a part is represented as a component number (c), plus
  73. the multiplicity of the component in that part (v) as well as the
  74. total multiplicity available for that component (u). This saves
  75. time that would be spent skipping over zeros.
  76. """
class PartComponent:
    """Internal class used in support of the multiset partitions
    enumerators and the associated visitor functions.

    Represents one component of one part of the current partition.

    A stack of these, plus an auxiliary frame array, f, represents a
    partition of the multiset.

    Knuth's pseudocode makes c, u, and v separate arrays.
    """

    __slots__ = ('c', 'u', 'v')

    def __init__(self):
        self.c = 0   # Component number
        self.u = 0   # The as yet unpartitioned amount in component c
                     # *before* it is allocated by this triple
        self.v = 0   # Amount of c component in the current part
                     # (v<=u).  An invariant of the representation is
                     # that the next higher triple for this component
                     # (if there is one) will have a value of u-v in
                     # its u attribute.

    def __repr__(self):
        "for debug/algorithm animation purposes"
        return 'c:%d u:%d v:%d' % (self.c, self.u, self.v)

    def __eq__(self, other):
        """Define value oriented equality, which is useful for testers"""
        return (isinstance(other, self.__class__) and
                self.c == other.c and
                self.u == other.u and
                self.v == other.v)

    def __ne__(self, other):
        """Defined for consistency with __eq__"""
        return not self == other

# This function tries to be a faithful implementation of algorithm
# 7.1.2.5M in Volume 4A, Combinatorial Algorithms, Part 1, of The Art
# of Computer Programming, by Donald Knuth.  This includes using
# (mostly) the same variable names, etc.  This makes for rather
# low-level Python.
#
# Changes from Knuth's pseudocode include
# - use PartComponent struct/object instead of 3 arrays
# - make the function a generator
# - map (with some difficulty) the GOTOs to Python control structures.
# - Knuth uses 1-based numbering for components, this code is 0-based
# - renamed variable l to lpart.
# - flag variable x takes on values True/False instead of 1/0
#
def multiset_partitions_taocp(multiplicities):
    """Enumerates partitions of a multiset.

    Parameters
    ==========

    multiplicities
         list of integer multiplicities of the components of the multiset.

    Yields
    ======

    state
        Internal data structure which encodes a particular partition.
        This output is then usually processed by a visitor function
        which combines the information from this data structure with
        the components themselves to produce an actual partition.

        Unless they wish to create their own visitor function, users will
        have little need to look inside this data structure.  But, for
        reference, it is a 3-element list with components:

        f
            is a frame array, which is used to divide pstack into parts.

        lpart
            points to the base of the topmost part.

        pstack
            is an array of PartComponent objects.

        The ``state`` output offers a peek into the internal data
        structures of the enumeration function.  The client should
        treat this as read-only; any modification of the data
        structure will cause unpredictable (and almost certainly
        incorrect) results.  Also, the components of ``state`` are
        modified in place at each iteration.  Hence, the visitor must
        be called at each loop iteration.  Accumulating the ``state``
        instances and processing them later will not work.

    Examples
    ========

    >>> from sympy.utilities.enumerative import list_visitor
    >>> from sympy.utilities.enumerative import multiset_partitions_taocp
    >>> # variables components and multiplicities represent the multiset 'abb'
    >>> components = 'ab'
    >>> multiplicities = [1, 2]
    >>> states = multiset_partitions_taocp(multiplicities)
    >>> list(list_visitor(state, components) for state in states)
    [[['a', 'b', 'b']],
    [['a', 'b'], ['b']],
    [['a'], ['b', 'b']],
    [['a'], ['b'], ['b']]]

    See Also
    ========

    sympy.utilities.iterables.multiset_partitions: Takes a multiset
        as input and directly yields multiset partitions.  It
        dispatches to a number of functions, including this one, for
        implementation.  Most users will find it more convenient to
        use than multiset_partitions_taocp.

    """
    # Important variables.
    # m is the number of components, i.e., number of distinct elements
    m = len(multiplicities)
    # n is the cardinality, total number of elements whether or not distinct
    n = sum(multiplicities)

    # The main data structure, f segments pstack into parts.  See
    # list_visitor() for example code indicating how this internal
    # state corresponds to a partition.

    # Note: allocation of space for stack is conservative.  Knuth's
    # exercise 7.2.1.5.68 gives some indication of how to tighten this
    # bound, but this is not implemented.
    pstack = [PartComponent() for i in range(n * m + 1)]
    f = [0] * (n + 1)

    # Step M1 in Knuth (Initialize)
    # Initial state - entire multiset in one part.
    for j in range(m):
        ps = pstack[j]
        ps.c = j
        ps.u = multiplicities[j]
        ps.v = multiplicities[j]

    # Other variables
    f[0] = 0
    a = 0
    lpart = 0
    f[1] = m
    b = m  # in general, current stack frame is from a to b - 1

    while True:
        while True:
            # Step M2 (Subtract v from u)
            j = a
            k = b
            x = False
            while j < b:
                pstack[k].u = pstack[j].u - pstack[j].v
                if pstack[k].u == 0:
                    x = True
                elif not x:
                    pstack[k].c = pstack[j].c
                    pstack[k].v = min(pstack[j].v, pstack[k].u)
                    x = pstack[k].u < pstack[j].v
                    k = k + 1
                else:  # x is True
                    pstack[k].c = pstack[j].c
                    pstack[k].v = pstack[k].u
                    k = k + 1
                j = j + 1
                # Note: x is True iff v has changed

            # Step M3 (Push if nonzero.)
            if k > b:
                a = b
                b = k
                lpart = lpart + 1
                f[lpart + 1] = b
                # Return to M2
            else:
                break  # Continue to M4

        # M4  Visit a partition
        state = [f, lpart, pstack]
        yield state

        # M5 (Decrease v)
        while True:
            j = b - 1
            while pstack[j].v == 0:
                j = j - 1
            if j == a and pstack[j].v == 1:
                # M6 (Backtrack)
                if lpart == 0:
                    return
                lpart = lpart - 1
                b = a
                a = f[lpart]
                # Return to M5
            else:
                pstack[j].v = pstack[j].v - 1
                for k in range(j + 1, b):
                    pstack[k].v = pstack[k].u
                break  # GOTO M2

# --------------- Visitor functions for multiset partitions ---------------
# A visitor takes the partition state generated by
# multiset_partitions_taocp or other enumerator, and produces useful
# output (such as the actual partition).


def factoring_visitor(state, primes):
    """Use with multiset_partitions_taocp to enumerate the ways a
    number can be expressed as a product of factors.  For this usage,
    the exponents of the prime factors of a number are arguments to
    the partition enumerator, while the corresponding prime factors
    are input here.

    Examples
    ========

    To enumerate the factorings of a number we can think of the elements of the
    partition as being the prime factors and the multiplicities as being their
    exponents.

    >>> from sympy.utilities.enumerative import factoring_visitor
    >>> from sympy.utilities.enumerative import multiset_partitions_taocp
    >>> from sympy import factorint
    >>> primes, multiplicities = zip(*factorint(24).items())
    >>> primes
    (2, 3)
    >>> multiplicities
    (3, 1)
    >>> states = multiset_partitions_taocp(multiplicities)
    >>> list(factoring_visitor(state, primes) for state in states)
    [[24], [8, 3], [12, 2], [4, 6], [4, 2, 3], [6, 2, 2], [2, 2, 2, 3]]
    """
    f, lpart, pstack = state
    factoring = []
    for i in range(lpart + 1):
        factor = 1
        for ps in pstack[f[i]: f[i + 1]]:
            if ps.v > 0:
                factor *= primes[ps.c] ** ps.v
        factoring.append(factor)
    return factoring


def list_visitor(state, components):
    """Return a list of lists to represent the partition.

    Examples
    ========

    >>> from sympy.utilities.enumerative import list_visitor
    >>> from sympy.utilities.enumerative import multiset_partitions_taocp
    >>> states = multiset_partitions_taocp([1, 2, 1])
    >>> s = next(states)
    >>> list_visitor(s, 'abc')  # for multiset 'a b b c'
    [['a', 'b', 'b', 'c']]
    >>> s = next(states)
    >>> list_visitor(s, [1, 2, 3])  # for multiset '1 2 2 3'
    [[1, 2, 2], [3]]
    """
    f, lpart, pstack = state

    partition = []
    for i in range(lpart + 1):
        part = []
        for ps in pstack[f[i]:f[i + 1]]:
            if ps.v > 0:
                part.extend([components[ps.c]] * ps.v)
        partition.append(part)

    return partition

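How the frame array ``f`` segments ``pstack`` into parts can be seen in miniature with plain lists.  This is a toy sketch with invented data; the real ``pstack`` holds PartComponent triples, but the slicing is the same one used by ``list_visitor``:

```python
# Toy illustration of the frame array: part i occupies positions
# f[i] .. f[i+1]-1 of the flat stack.
pstack = ['a', 'b', 'b', 'c', 'd']  # flattened parts (toy data)
f = [0, 3, 5]                       # frame array: two parts
lpart = 1                           # index of the topmost part

parts = [pstack[f[i]:f[i + 1]] for i in range(lpart + 1)]
assert parts == [['a', 'b', 'b'], ['c', 'd']]
```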
class MultisetPartitionTraverser():
    """
    Has methods to ``enumerate`` and ``count`` the partitions of a multiset.

    This implements a refactored and extended version of Knuth's algorithm
    7.1.2.5M [AOCP]_.

    The enumeration methods of this class are generators and return
    data structures which can be interpreted by the same visitor
    functions used for the output of ``multiset_partitions_taocp``.

    Examples
    ========

    >>> from sympy.utilities.enumerative import MultisetPartitionTraverser
    >>> m = MultisetPartitionTraverser()
    >>> m.count_partitions([4,4,4,2])
    127750
    >>> m.count_partitions([3,3,3])
    686

    See Also
    ========

    multiset_partitions_taocp
    sympy.utilities.iterables.multiset_partitions

    References
    ==========

    .. [AOCP] Algorithm 7.1.2.5M in Volume 4A, Combinatorial Algorithms,
           Part 1, of The Art of Computer Programming, by Donald Knuth.

    .. [Factorisatio] On a Problem of Oppenheim concerning
           "Factorisatio Numerorum" E. R. Canfield, Paul Erdos, Carl
           Pomerance, JOURNAL OF NUMBER THEORY, Vol. 17, No. 1. August
           1983.  See section 7 for a description of an algorithm
           similar to Knuth's.

    .. [Yorgey] Generating Multiset Partitions, Brent Yorgey, The
           Monad.Reader, Issue 8, September 2007.

    """
    def __init__(self):
        self.debug = False
        # TRACING variables.  These are useful for gathering
        # statistics on the algorithm itself, but have no particular
        # benefit to a user of the code.
        self.k1 = 0
        self.k2 = 0
        self.p1 = 0
        self.pstack = None
        self.f = None
        self.lpart = 0
        self.discarded = 0
        # dp_stack is list of lists of (part_key, start_count) pairs
        self.dp_stack = []

        # dp_map is map part_key-> count, where count represents the
        # number of multisets which are descendants of a part with this
        # key, **or any of its decrements**

        # Thus, when we find a part in the map, we add its count
        # value to the running total, cut off the enumeration, and
        # backtrack

        if not hasattr(self, 'dp_map'):
            self.dp_map = {}

    def db_trace(self, msg):
        """Useful for understanding/debugging the algorithms.  Not
        generally activated in end-user code."""
        if self.debug:
            # XXX: animation_visitor is undefined... Clearly this does not
            # work and was not tested. Previous code in comments below.
            raise RuntimeError
            #letters = 'abcdefghijklmnopqrstuvwxyz'
            #state = [self.f, self.lpart, self.pstack]
            #print("DBG:", msg,
            #      ["".join(part) for part in list_visitor(state, letters)],
            #      animation_visitor(state))

    #
    # Helper methods for enumeration
    #
    def _initialize_enumeration(self, multiplicities):
        """Allocates and initializes the partition stack.

        This is called from the enumeration/counting routines, so
        there is no need to call it separately."""

        num_components = len(multiplicities)
        # cardinality is the total number of elements, whether or not distinct
        cardinality = sum(multiplicities)
        # pstack is the partition stack, which is segmented by
        # f into parts.
        self.pstack = [PartComponent() for i in
                       range(num_components * cardinality + 1)]
        self.f = [0] * (cardinality + 1)

        # Initial state - entire multiset in one part.
        for j in range(num_components):
            ps = self.pstack[j]
            ps.c = j
            ps.u = multiplicities[j]
            ps.v = multiplicities[j]
        self.f[0] = 0
        self.f[1] = num_components
        self.lpart = 0
    # The decrement_part() method corresponds to step M5 in Knuth's
    # algorithm.  This is the base version for enum_all().  Modified
    # versions of this method are needed if we want to restrict
    # sizes of the partitions produced.
    def decrement_part(self, part):
        """Decrements part (a subrange of pstack), if possible, returning
        True iff the part was successfully decremented.

        If you think of the v values in the part as a multi-digit
        integer (least significant digit on the right) this is
        basically decrementing that integer, but with the extra
        constraint that the leftmost digit cannot be decremented to 0.

        Parameters
        ==========

        part
           The part, represented as a list of PartComponent objects,
           which is to be decremented.

        """
        plen = len(part)
        for j in range(plen - 1, -1, -1):
            if j == 0 and part[j].v > 1 or j > 0 and part[j].v > 0:
                # found val to decrement
                part[j].v -= 1
                # Reset trailing parts back to maximum
                for k in range(j + 1, plen):
                    part[k].v = part[k].u
                return True
        return False
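The multi-digit-integer intuition in the docstring can be sketched on bare lists, without PartComponent objects.  This is an illustrative stand-in, not the class's method: it assumes ``v`` and ``u`` are parallel lists of multiplicities, and the name ``decrement_vector`` is invented for this sketch.

```python
def decrement_vector(v, u):
    """Decrement v in place; the leftmost digit may not reach 0.
    Returns True on success, False for a non-decrementable singleton.
    u[k] is the maximum multiplicity available in position k."""
    for j in range(len(v) - 1, -1, -1):
        if v[j] > (1 if j == 0 else 0):
            v[j] -= 1
            for k in range(j + 1, len(v)):
                v[k] = u[k]   # reset trailing digits to their maximum
            return True
    return False

# Reproduces the module docstring's examples for multiset [5, 3, 1]:
# [4, 2, 1] -> [4, 2, 0], [4, 0, 0] -> [3, 3, 1], and [1, 0, 0] fails.
v = [4, 0, 0]
decrement_vector(v, [5, 3, 1])
assert v == [3, 3, 1]
```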
    # Version to allow number of parts to be bounded from above.
    # Corresponds to (a modified) step M5.
    def decrement_part_small(self, part, ub):
        """Decrements part (a subrange of pstack), if possible, returning
        True iff the part was successfully decremented.

        Parameters
        ==========

        part
            part to be decremented (topmost part on the stack)

        ub
            the maximum number of parts allowed in a partition
            returned by the calling traversal.

        Notes
        =====

        The goal of this modification of the ordinary decrement method
        is to fail (meaning that the subtree rooted at this part is to
        be skipped) when it can be proved that this part can only have
        child partitions which are larger than allowed by ``ub``.  If a
        decision is made to fail, it must be accurate, otherwise the
        enumeration will miss some partitions.  But, it is OK not to
        capture all the possible failures -- if a part is passed that
        shouldn't be, the resulting too-large partitions are filtered
        by the enumeration one level up.  However, as is usual in
        constrained enumerations, failing early is advantageous.

        The tests used by this method catch the most common cases,
        although this implementation is by no means the last word on
        this problem.  The tests include:

        1) ``lpart`` must be less than ``ub`` by at least 2.  This is because
           once a part has been decremented, the partition
           will gain at least one child in the spread step.

        2) If the leading component of the part is about to be
           decremented, check for how many parts will be added in
           order to use up the unallocated multiplicity in that
           leading component, and fail if this number is greater than
           allowed by ``ub``.  (See code for the exact expression.)  This
           test is given in the answer to Knuth's problem 7.2.1.5.69.

        3) If there is *exactly* enough room to expand the leading
           component by the above test, check the next component (if
           it exists) once decrementing has finished.  If this has
           ``v == 0``, this next component will push the expansion over the
           limit by 1, so fail.
        """
        if self.lpart >= ub - 1:
            self.p1 += 1  # increment to keep track of usefulness of tests
            return False
        plen = len(part)
        for j in range(plen - 1, -1, -1):
            # Knuth's mod, (answer to problem 7.2.1.5.69)
            if j == 0 and (part[0].v - 1)*(ub - self.lpart) < part[0].u:
                self.k1 += 1
                return False

            if j == 0 and part[j].v > 1 or j > 0 and part[j].v > 0:
                # found val to decrement
                part[j].v -= 1
                # Reset trailing parts back to maximum
                for k in range(j + 1, plen):
                    part[k].v = part[k].u

                # Have now decremented part, but are we doomed to
                # failure when it is expanded?  Check one oddball case
                # that turns out to be surprisingly common - exactly
                # enough room to expand the leading component, but no
                # room for the second component, which has v=0.
                if (plen > 1 and part[1].v == 0 and
                    (part[0].u - part[0].v) ==
                        ((ub - self.lpart - 1) * part[0].v)):
                    self.k2 += 1
                    self.db_trace("Decrement fails test 3")
                    return False
                return True
        return False
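Test 2 above can be checked numerically in isolation.  A hedged sketch of the reasoning: after the leading component drops to ``v - 1``, the ordering invariant means each of the at most ``ub - lpart`` parts yet to be created can absorb at most ``v - 1`` units of that component, so the unallocated amount ``u`` must fit within ``(v - 1) * (ub - lpart)``.  The helper name below is invented for this sketch.

```python
def leading_component_fits(v, u, ub, lpart):
    """Mirror of the test-2 expression in decrement_part_small:
    True iff spreading u units, at most v - 1 per new part, can
    finish within the ub - lpart remaining part slots."""
    return (v - 1) * (ub - lpart) >= u

# With v=2, u=6 and room for 3 more parts: 1 * 3 = 3 < 6, so the
# decrement must fail; with v=3, u=4: 2 * 3 = 6 >= 4, so it may pass.
```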
    def decrement_part_large(self, part, amt, lb):
        """Decrements part, while respecting size constraint.

        A part can have no children which are of sufficient size (as
        indicated by ``lb``) unless that part has sufficient
        unallocated multiplicity.  When enforcing the size constraint,
        this method will decrement the part (if necessary) by an
        amount needed to ensure sufficient unallocated multiplicity.

        Returns True iff the part was successfully decremented.

        Parameters
        ==========

        part
            part to be decremented (topmost part on the stack)

        amt
            Can only take values 0 or 1.  A value of 1 means that the
            part must be decremented, and then the size constraint is
            enforced.  A value of 0 means just to enforce the ``lb``
            size constraint.

        lb
            The partitions produced by the calling enumeration must
            have more parts than this value.

        """

        if amt == 1:
            # In this case we always need to decrement, *before*
            # enforcing the "sufficient unallocated multiplicity"
            # constraint.  Easiest for this is just to call the
            # regular decrement method.
            if not self.decrement_part(part):
                return False

        # Next, perform any needed additional decrementing to respect
        # "sufficient unallocated multiplicity" (or fail if this is
        # not possible).
        min_unalloc = lb - self.lpart
        if min_unalloc <= 0:
            return True
        total_mult = sum(pc.u for pc in part)
        total_alloc = sum(pc.v for pc in part)
        if total_mult <= min_unalloc:
            return False

        deficit = min_unalloc - (total_mult - total_alloc)
        if deficit <= 0:
            return True

        for i in range(len(part) - 1, -1, -1):
            if i == 0:
                if part[0].v > deficit:
                    part[0].v -= deficit
                    return True
                else:
                    return False  # This shouldn't happen, due to above check
            else:
                if part[i].v >= deficit:
                    part[i].v -= deficit
                    return True
                else:
                    deficit -= part[i].v
                    part[i].v = 0
    def decrement_part_range(self, part, lb, ub):
        """Decrements part (a subrange of pstack), if possible, returning
        True iff the part was successfully decremented.

        Parameters
        ==========

        part
            part to be decremented (topmost part on the stack)

        ub
            the maximum number of parts allowed in a partition
            returned by the calling traversal.

        lb
            The partitions produced by the calling enumeration must
            have more parts than this value.

        Notes
        =====

        Combines the constraints of _small and _large decrement
        methods.  If it returns success, part has been decremented at
        least once, but perhaps by quite a bit more if needed to meet
        the lb constraint.
        """

        # Constraint in the range case is just enforcing both the
        # constraints from _small and _large cases.  Note the 0 as the
        # second argument to the _large call -- this is the signal to
        # decrement only as needed for constraint enforcement.  The
        # short circuiting and left-to-right order of the 'and'
        # operator is important for this to work correctly.
        return self.decrement_part_small(part, ub) and \
            self.decrement_part_large(part, 0, lb)
    def spread_part_multiplicity(self):
        """Returns True if a new part has been created, and
        adjusts pstack, f and lpart as needed.

        Notes
        =====

        Spreads unallocated multiplicity from the current top part
        into a new part created above the current on the stack.  This
        new part is constrained to be less than or equal to the old in
        terms of the part ordering.

        This call does nothing (and returns False) if the current top
        part has no unallocated multiplicity.

        """
        j = self.f[self.lpart]  # base of current top part
        k = self.f[self.lpart + 1]  # ub of current; potential base of next
        base = k  # save for later comparison

        changed = False  # Set to true when the new part (so far) is
                         # strictly less than (as opposed to less than
                         # or equal) to the old.
        for j in range(self.f[self.lpart], self.f[self.lpart + 1]):
            self.pstack[k].u = self.pstack[j].u - self.pstack[j].v
            if self.pstack[k].u == 0:
                changed = True
            else:
                self.pstack[k].c = self.pstack[j].c
                if changed:  # Put all available multiplicity in this part
                    self.pstack[k].v = self.pstack[k].u
                else:  # Still maintaining ordering constraint
                    if self.pstack[k].u < self.pstack[j].v:
                        self.pstack[k].v = self.pstack[k].u
                        changed = True
                    else:
                        self.pstack[k].v = self.pstack[j].v
                k = k + 1
        if k > base:
            # Adjust for the new part on stack
            self.lpart = self.lpart + 1
            self.f[self.lpart + 1] = k
            return True
        return False
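The effect of this method can be sketched on bare vectors, independent of pstack and PartComponent.  The function below is a simplified stand-in written for illustration (the name and the dense-list representation are invented for this sketch); it reproduces the two spread examples from the module docstring.

```python
def spread(multiset, stack):
    """Given the full multiset vector and a stack of parts in
    decreasing order, repeatedly push the largest new part that fits
    in the leftover multiplicity without exceeding the current top
    part, until all multiplicity is allocated."""
    while True:
        used = [sum(col) for col in zip(*stack)]
        leftover = [m - s for m, s in zip(multiset, used)]
        if not any(leftover):
            return stack
        top = stack[-1]
        new = []
        changed = False  # True once the new part is strictly below top
        for t, l in zip(top, leftover):
            if changed:
                new.append(l)      # ordering already safe; take it all
            elif l < t:
                new.append(l)      # forced below the top part here
                changed = True
            else:
                new.append(t)      # match the top part, stay <= it
        stack.append(new)

# Docstring examples for multiset [5, 3, 1]:
assert spread([5, 3, 1], [[5, 3, 0]]) == [[5, 3, 0], [0, 0, 1]]
assert spread([5, 3, 1], [[2, 0, 0]]) == [[2, 0, 0], [2, 0, 0], [1, 3, 1]]
```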
    def top_part(self):
        """Return current top part on the stack, as a slice of pstack.

        """
        return self.pstack[self.f[self.lpart]:self.f[self.lpart + 1]]
    # Same interface and functionality as multiset_partitions_taocp(),
    # but some might find this refactored version easier to follow.
    def enum_all(self, multiplicities):
        """Enumerate the partitions of a multiset.

        Examples
        ========

        >>> from sympy.utilities.enumerative import list_visitor
        >>> from sympy.utilities.enumerative import MultisetPartitionTraverser
        >>> m = MultisetPartitionTraverser()
        >>> states = m.enum_all([2,2])
        >>> list(list_visitor(state, 'ab') for state in states)
        [[['a', 'a', 'b', 'b']],
        [['a', 'a', 'b'], ['b']],
        [['a', 'a'], ['b', 'b']],
        [['a', 'a'], ['b'], ['b']],
        [['a', 'b', 'b'], ['a']],
        [['a', 'b'], ['a', 'b']],
        [['a', 'b'], ['a'], ['b']],
        [['a'], ['a'], ['b', 'b']],
        [['a'], ['a'], ['b'], ['b']]]

        See Also
        ========

        multiset_partitions_taocp():
            which provides the same result as this method, but is
            about twice as fast.  Hence, enum_all is primarily useful
            for testing.  Also see the function for a discussion of
            states and visitors.

        """
        self._initialize_enumeration(multiplicities)
        while True:
            while self.spread_part_multiplicity():
                pass

            # M4  Visit a partition
            state = [self.f, self.lpart, self.pstack]
            yield state

            # M5 (Decrease v)
            while not self.decrement_part(self.top_part()):
                # M6 (Backtrack)
                if self.lpart == 0:
                    return
                self.lpart -= 1
    def enum_small(self, multiplicities, ub):
        """Enumerate multiset partitions with no more than ``ub`` parts.

        Equivalent to enum_range(multiplicities, 0, ub)

        Parameters
        ==========

        multiplicities
            list of multiplicities of the components of the multiset.

        ub
            Maximum number of parts

        Examples
        ========

        >>> from sympy.utilities.enumerative import list_visitor
        >>> from sympy.utilities.enumerative import MultisetPartitionTraverser
        >>> m = MultisetPartitionTraverser()
        >>> states = m.enum_small([2,2], 2)
        >>> list(list_visitor(state, 'ab') for state in states)
        [[['a', 'a', 'b', 'b']],
        [['a', 'a', 'b'], ['b']],
        [['a', 'a'], ['b', 'b']],
        [['a', 'b', 'b'], ['a']],
        [['a', 'b'], ['a', 'b']]]

        The implementation is based, in part, on the answer given to
        exercise 69, in Knuth [AOCP]_.

        See Also
        ========

        enum_all, enum_large, enum_range

        """
        # Keep track of iterations which do not yield a partition.
        # Clearly, we would like to keep this number small.
        self.discarded = 0
        if ub <= 0:
            return
        self._initialize_enumeration(multiplicities)
        while True:
            while self.spread_part_multiplicity():
                self.db_trace('spread 1')
                if self.lpart >= ub:
                    self.discarded += 1
                    self.db_trace('  Discarding')
                    self.lpart = ub - 2
                    break
            else:
                # M4  Visit a partition
                state = [self.f, self.lpart, self.pstack]
                yield state

            # M5 (Decrease v)
            while not self.decrement_part_small(self.top_part(), ub):
                self.db_trace("Failed decrement, going to backtrack")
                # M6 (Backtrack)
                if self.lpart == 0:
                    return
                self.lpart -= 1
                self.db_trace("Backtracked to")
            self.db_trace("decrement ok, about to expand")
    def enum_large(self, multiplicities, lb):
        """Enumerate the partitions of a multiset with ``lb < num(parts)``.

        Equivalent to enum_range(multiplicities, lb, sum(multiplicities))

        Parameters
        ==========

        multiplicities
            list of multiplicities of the components of the multiset.

        lb
            Number of parts in the partition must be greater than
            this lower bound.

        Examples
        ========

        >>> from sympy.utilities.enumerative import list_visitor
        >>> from sympy.utilities.enumerative import MultisetPartitionTraverser
        >>> m = MultisetPartitionTraverser()
        >>> states = m.enum_large([2,2], 2)
        >>> list(list_visitor(state, 'ab') for state in states)
        [[['a', 'a'], ['b'], ['b']],
        [['a', 'b'], ['a'], ['b']],
        [['a'], ['a'], ['b', 'b']],
        [['a'], ['a'], ['b'], ['b']]]

        See Also
        ========

        enum_all, enum_small, enum_range

        """
        self.discarded = 0
        if lb >= sum(multiplicities):
            return
        self._initialize_enumeration(multiplicities)
        self.decrement_part_large(self.top_part(), 0, lb)
        while True:
            good_partition = True
            while self.spread_part_multiplicity():
                if not self.decrement_part_large(self.top_part(), 0, lb):
                    # Failure here should be rare/impossible
                    self.discarded += 1
                    good_partition = False
                    break

            # M4  Visit a partition
            if good_partition:
                state = [self.f, self.lpart, self.pstack]
                yield state

            # M5 (Decrease v)
            while not self.decrement_part_large(self.top_part(), 1, lb):
                # M6 (Backtrack)
                if self.lpart == 0:
                    return
                self.lpart -= 1
    def enum_range(self, multiplicities, lb, ub):
        """Enumerate the partitions of a multiset with
        ``lb < num(parts) <= ub``.

        In particular, if partitions with exactly ``k`` parts are
        desired, call with ``(multiplicities, k - 1, k)``.  This
        method generalizes enum_all, enum_small, and enum_large.

        Examples
        ========

        >>> from sympy.utilities.enumerative import list_visitor
        >>> from sympy.utilities.enumerative import MultisetPartitionTraverser
        >>> m = MultisetPartitionTraverser()
        >>> states = m.enum_range([2,2], 1, 2)
        >>> list(list_visitor(state, 'ab') for state in states)
        [[['a', 'a', 'b'], ['b']],
        [['a', 'a'], ['b', 'b']],
        [['a', 'b', 'b'], ['a']],
        [['a', 'b'], ['a', 'b']]]

        """
        # combine the constraints of the _large and _small
        # enumerations.
        self.discarded = 0
        if ub <= 0 or lb >= sum(multiplicities):
            return
        self._initialize_enumeration(multiplicities)
        self.decrement_part_large(self.top_part(), 0, lb)
        while True:
            good_partition = True
            while self.spread_part_multiplicity():
                self.db_trace("spread 1")
                if not self.decrement_part_large(self.top_part(), 0, lb):
                    # Failure here - possible in range case?
                    self.db_trace("  Discarding (large cons)")
                    self.discarded += 1
                    good_partition = False
                    break
                elif self.lpart >= ub:
                    self.discarded += 1
                    good_partition = False
                    self.db_trace("  Discarding small cons")
                    self.lpart = ub - 2
                    break

            # M4  Visit a partition
            if good_partition:
                state = [self.f, self.lpart, self.pstack]
                yield state

            # M5 (Decrease v)
            while not self.decrement_part_range(self.top_part(), lb, ub):
                self.db_trace("Failed decrement, going to backtrack")
                # M6 (Backtrack)
                if self.lpart == 0:
                    return
                self.lpart -= 1
                self.db_trace("Backtracked to")
            self.db_trace("decrement ok, about to expand")
    def count_partitions_slow(self, multiplicities):
        """Returns the number of partitions of a multiset whose elements
        have the multiplicities given in ``multiplicities``.

        Primarily for comparison purposes.  It follows the same path as
        enumerate, and counts, rather than generates, the partitions.

        See Also
        ========

        count_partitions
            Has the same calling interface, but is much faster.

        """
        # number of partitions so far in the enumeration
        self.pcount = 0
        self._initialize_enumeration(multiplicities)
        while True:
            while self.spread_part_multiplicity():
                pass

            # M4  Visit (count) a partition
            self.pcount += 1

            # M5 (Decrease v)
            while not self.decrement_part(self.top_part()):
                # M6 (Backtrack)
                if self.lpart == 0:
                    return self.pcount
                self.lpart -= 1
    def count_partitions(self, multiplicities):
        """Returns the number of partitions of a multiset whose components
        have the multiplicities given in ``multiplicities``.

        For larger counts, this method is much faster than calling one
        of the enumerators and counting the result.  Uses dynamic
        programming to cut down on the number of nodes actually
        explored.  The dictionary used to accelerate the counting
        process is stored in the ``MultisetPartitionTraverser`` object
        and persists across calls.  If the user does not expect to call
        ``count_partitions`` for any additional multisets, the object
        should be cleared to save memory.  On the other hand, the cache
        built up from one count run can significantly speed up
        subsequent calls to ``count_partitions``, so it may be
        advantageous not to clear the object.

        Examples
        ========

        >>> from sympy.utilities.enumerative import MultisetPartitionTraverser
        >>> m = MultisetPartitionTraverser()
        >>> m.count_partitions([9,8,2])
        288716
        >>> m.count_partitions([2,2])
        9
        >>> del m

        Notes
        =====

        If one looks at the workings of Knuth's algorithm M [AOCP]_, it
        can be viewed as a traversal of a binary tree of parts.  A
        part has (up to) two children, the left child resulting from
        the spread operation, and the right child from the decrement
        operation.  The ordinary enumeration of multiset partitions is
        an in-order traversal of this tree, with the partitions
        corresponding to paths from the root to the leaves.  The
        mapping from paths to partitions is a little complicated,
        since the partition would contain only those parts which are
        leaves or the parents of a spread link, not those which are
        parents of a decrement link.

        For counting purposes, it is sufficient to count leaves, and
        this can be done with a recursive in-order traversal.  The
        number of leaves of a subtree rooted at a particular part is a
        function only of that part itself, so memoizing has the
        potential to speed up the counting dramatically.

        This method follows a computational approach which is similar
        to the hypothetical memoized recursive function, but with two
        differences:

        1) This method is iterative, borrowing its structure from the
           other enumerations and maintaining an explicit stack of
           parts which are in the process of being counted.  (There
           may be multisets which can be counted reasonably quickly by
           this implementation, but which would overflow the default
           Python recursion limit with a recursive implementation.)

        2) Instead of using the part data structure directly, a more
           compact key is constructed.  This saves space, but more
           importantly coalesces some parts which would remain
           separate with physical keys.

        Unlike the enumeration functions, there is currently no _range
        version of count_partitions.  If someone wants to stretch
        their brain, it should be possible to construct one by
        memoizing with a histogram of counts rather than a single
        count, and combining the histograms.

        """
        # number of partitions so far in the enumeration
        self.pcount = 0
        # dp_stack is list of lists of (part_key, start_count) pairs
        self.dp_stack = []
        self._initialize_enumeration(multiplicities)
        pkey = part_key(self.top_part())
        self.dp_stack.append([(pkey, 0), ])
        while True:
            while self.spread_part_multiplicity():
                pkey = part_key(self.top_part())
                if pkey in self.dp_map:
                    # Already have a cached value for the count of the
                    # subtree rooted at this part.  Add it to the
                    # running counter, and break out of the spread
                    # loop.  The -1 below is to compensate for the
                    # leaf that this code path would otherwise find,
                    # and which gets counted by the increment below.
                    self.pcount += (self.dp_map[pkey] - 1)
                    self.lpart -= 1
                    break
                else:
                    self.dp_stack.append([(pkey, self.pcount), ])

            # M4  count a leaf partition
            self.pcount += 1

            # M5 (Decrease v)
            while not self.decrement_part(self.top_part()):
                # M6 (Backtrack)
                for key, oldcount in self.dp_stack.pop():
                    self.dp_map[key] = self.pcount - oldcount
                if self.lpart == 0:
                    return self.pcount
                self.lpart -= 1

            # At this point have successfully decremented the part on
            # the stack and it does not appear in the cache.  It needs
            # to be added to the list at the top of dp_stack
            pkey = part_key(self.top_part())
            self.dp_stack[-1].append((pkey, self.pcount),)
def part_key(part):
    """Helper for MultisetPartitionTraverser.count_partitions that
    creates a key for ``part``, that only includes information which can
    affect the count for that part.  (Any irrelevant information just
    reduces the effectiveness of dynamic programming.)

    Notes
    =====

    This function is a candidate for future exploration.  There
    are likely symmetries that can be exploited to coalesce some
    ``part_key`` values, and thereby save space and improve
    performance.

    """
    # The component number is irrelevant for counting partitions, so
    # leave it out of the memo key.
    rval = []
    for ps in part:
        rval.append(ps.u)
        rval.append(ps.v)
    return tuple(rval)