GRID is a large multitalker audiovisual sentence corpus designed to support joint computational-behavioral studies in speech perception. The corpus consists of high-quality audio and video (facial) recordings of 1000 sentences spoken by each of 34 talkers (18 male, 16 female). Sentences are of the form "put red at G9 now". More details about GRID can be found at http://spandh.dcs.shef.ac.uk/gridcorpus/ or in the paper at http://dx.doi.org/10.1121/1.2229005. The subset of the GRID corpus provided here contains 360 randomly selected sentences from each of the 34 talkers. The corpus, together with transcriptions, is freely available for research use.
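Since every GRID sentence follows the same fixed template illustrated by "put red at G9 now", transcriptions can be split into their component fields mechanically. The sketch below is a hypothetical helper (the function name, field names, and regular expression are illustrative assumptions, not part of the corpus distribution) showing one way to parse such a sentence:

```python
import re

# Assumed template: "<command> <color> <preposition> <letter><digit> <adverb>",
# matching the example sentence "put red at G9 now" from the description above.
GRID_PATTERN = re.compile(
    r"^(?P<command>\w+) (?P<color>\w+) (?P<preposition>\w+) "
    r"(?P<letter>[A-Z])(?P<digit>\d) (?P<adverb>\w+)$"
)

def parse_grid_sentence(sentence):
    """Split a GRID-style sentence into its named fields."""
    match = GRID_PATTERN.match(sentence)
    if match is None:
        raise ValueError(f"not a GRID-style sentence: {sentence!r}")
    return match.groupdict()

# Example:
# parse_grid_sentence("put red at G9 now")
# → {'command': 'put', 'color': 'red', 'preposition': 'at',
#    'letter': 'G', 'digit': '9', 'adverb': 'now'}
```

A fixed-grammar corpus like this makes such rule-based parsing reliable, which is one reason GRID is convenient for controlled perception experiments.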