Computer vision is typically regarded as an open-universe problem because the set of possible scene contents is unbounded. Image segmentation via fuzzy-spatial-taxon-cut reduces segmentation to a closed-universe problem by assuming a standardized natural-scene taxonomy composed of spatial-taxons.
People describe spatial-taxons as thing-like: a group of things, or the foreground. They share properties, border ownership in particular, with the proto-objects described in biological vision. By defining spatial-taxons in a hierarchy, we operationalize the image segmentation problem as a series of iterative two-class inferences. As described in earlier publications, this method outperforms other segmentation methods for well-defined image classes and forms the basis of some commercial image-processing systems. This paper explores how the methodology used to provide the inputs to the low-level color-parsing stage affects overall image segmentation performance by comparing the effects of two methods: a fuzzy constraint and a Bayes classifier. We discuss how these methods alter the performance of the two-class fuzzy inference system discussed in earlier work.
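To make the contrast concrete, the following is a minimal sketch of how a fuzzy constraint and a two-class Bayes classifier can each map a pixel's color feature to a foreground score. All parameters (trapezoid breakpoints, Gaussian means and spreads, priors) are invented for illustration and do not reflect the actual implementation discussed in this paper:

```python
import math

def fuzzy_membership(x, a=0.2, b=0.4, c=0.6, d=0.8):
    """Trapezoidal fuzzy constraint: degree to which feature x is 'foreground'.

    Breakpoints a < b < c < d are hypothetical, chosen for illustration.
    """
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

def bayes_posterior(x, mu_fg=0.5, sd_fg=0.1, mu_bg=0.2, sd_bg=0.15, prior_fg=0.5):
    """Two-class Bayes classifier with Gaussian class-conditional likelihoods.

    Returns P(foreground | x); class parameters are invented for illustration.
    """
    def gauss(v, mu, sd):
        return math.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    p_fg = gauss(x, mu_fg, sd_fg) * prior_fg
    p_bg = gauss(x, mu_bg, sd_bg) * (1 - prior_fg)
    return p_fg / (p_fg + p_bg)

# Both methods produce a [0, 1] foreground score per pixel feature that can
# feed a downstream two-class inference stage.
print(fuzzy_membership(0.5))           # inside the plateau -> 1.0
print(round(bayes_posterior(0.5), 3))  # posterior probability of foreground
```

The practical difference is that the fuzzy constraint encodes hand-specified degrees of membership with no probabilistic model, while the Bayes classifier derives its score from class-conditional likelihoods and priors, which can be estimated from labeled data.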