We present an "Artificial Neural Tissue" (ANT) architecture as a control system for autonomous multirobot tasks. This architecture combines a typical neural-network structure with a coarse-coding strategy that permits specialized areas to develop in the tissue, which in turn allows emergent capabilities such as task decomposition. Only a single global fitness function and a set of allowable basis behaviors need be specified. An evolutionary (Darwinian) selection process is used to derive controllers for the task in simulation. This process results in the emergence of novel functionality through the task decomposition of mission goals. ANT-based controllers are shown to exhibit self-organization, employ stigmergy, and make use of templates (unlabeled environmental cues). These controllers have been tested on a multirobot resource-collection task in which teams of robots, with no explicit supervision, successfully avoid obstacles, explore terrain, locate resource material, and collect it in a designated area by using a light beacon for reference and interpreting unlabeled perimeter markings. The issues of scalability and antagonism are also addressed.
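The selection process described above, in which only a single global fitness function and a set of allowable basis behaviors are specified, can be illustrated with a minimal genetic-algorithm loop. This is a hedged sketch: the genome encoding, behavior names, population parameters, and toy fitness function are all illustrative assumptions, not the paper's actual ANT implementation or simulation.

```python
import random

# Illustrative basis behaviors (assumed names, not from the paper).
BASIS_BEHAVIORS = ["move_forward", "turn_left", "turn_right", "dump_resource"]
GENOME_LEN = 16        # behavior slots per controller (assumed)
POP_SIZE = 20
GENERATIONS = 30

def random_genome(rng):
    # A controller genome is a sequence of indices into the basis behaviors.
    return [rng.randrange(len(BASIS_BEHAVIORS)) for _ in range(GENOME_LEN)]

def global_fitness(genome):
    # Stand-in for a simulated resource-collection score: here we simply
    # reward genomes that employ a diverse mix of basis behaviors.
    return len(set(genome)) / len(BASIS_BEHAVIORS)

def evolve(seed=0):
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=global_fitness, reverse=True)
        parents = pop[: POP_SIZE // 2]            # truncation selection
        children = []
        for _ in range(POP_SIZE - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, GENOME_LEN)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(GENOME_LEN)         # point mutation
            child[i] = rng.randrange(len(BASIS_BEHAVIORS))
            children.append(child)
        pop = parents + children
    return max(pop, key=global_fitness)

best = evolve()
```

In the actual architecture, fitness would come from simulating robot teams on the resource-collection task; the point of the sketch is only that selection needs nothing beyond a scalar score and a fixed behavior repertoire.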