We demonstrate the utility of multiplexing through an application to high-throughput screening (HTS), a process central to drug discovery. In HTS, we search through sometimes millions of chemical compounds to find molecules that bind to a biological target. This process is costly, so any modification that makes the screen more efficient reduces the total cost of drug discovery. The current experimental design tests each compound from a library individually on a biological assay. Because the number of “hits”, or active compounds, is a small fraction of the library, this design is inefficient and wasteful. We implement an alternative experimental design, similar to combinatorial group testing, that allows the same chemical library to be tested in far fewer experiments. However, instead of constructing these designs by hand, we machine-learn them. We use a Bayesian posterior calculation and an information-theoretic metric, entropy, to score a multiplexed design, and we optimize the design under this metric with a modified genetic algorithm. Using our method, we have created optimal experiment designs that use significantly fewer assay measurements while respecting the constraint of not mixing too many compounds in a single test. We were able to design experiments that detect between 0 and 2 active compounds (a hit rate below 10%) in a library of 20 compounds with 100% accuracy using as few as 11 assays (a 45% reduction in effort relative to current naïve one-compound-per-assay testing) while mixing no more than 5 compounds in any test. These designs apply generally to many systems in biology that involve testing the effect of a large number of entities on the system. We believe this generalized in-silico design method can help systems biology advance its goal of “automated, high-throughput model-influenced experimentation”.
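The design-scoring idea above can be sketched in a few lines: enumerate the hidden activity states (here, up to 2 actives among 20 compounds), compute the assay outcome each state would produce under a pooled design, and score the design by the expected Shannon entropy of the Bayesian posterior over states. This is an illustrative sketch, not the paper's implementation: the noiseless OR-model assay (a test reads positive iff its pool contains any active compound), the independent-activity prior `prior`, and all function names are assumptions introduced here.

```python
from itertools import combinations
from math import log2

def outcome(design, actives):
    # OR model: a test reads positive iff its pool contains an active compound.
    return tuple(int(any(c in actives for c in pool)) for pool in design)

def expected_posterior_entropy(design, n_compounds, max_actives, prior=0.05):
    """Score a pooled design (list of pools, each a list of compound indices).

    Enumerates every hidden state with 0..max_actives active compounds,
    weights each state by an independent-activity prior, groups states by
    the outcome pattern they would produce, and returns the expected
    Shannon entropy of the posterior after observing the outcomes.
    A score of 0.0 means every reachable state is uniquely identified.
    """
    states = [frozenset(s) for k in range(max_actives + 1)
              for s in combinations(range(n_compounds), k)]
    weight = {s: prior**len(s) * (1 - prior)**(n_compounds - len(s))
              for s in states}
    z = sum(weight.values())  # normalize over the truncated state space

    groups = {}  # outcome pattern -> states consistent with it
    for s in states:
        groups.setdefault(outcome(design, s), []).append(s)

    h = 0.0
    for group in groups.values():
        pg = sum(weight[s] for s in group) / z  # probability of this outcome
        # Entropy of the posterior over states, given this outcome.
        h += pg * -sum((weight[s] / z / pg) * log2(weight[s] / z / pg)
                       for s in group)
    return h
```

As a sanity check, the naïve design that tests each of the 20 compounds in its own assay scores 0.0 (it always identifies the state exactly), while a degenerate design pooling all 20 compounds into one test leaves substantial posterior uncertainty. A genetic algorithm, as described above, would mutate and recombine candidate design matrices to minimize this score subject to the pool-size constraint.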