The work performed within the AI-PATH project is to develop tools that quantify the number of lymphocyte cells located around the tumor cells in relation to the stroma surrounding them. It is well known that if this ratio (the sTIL score) is higher than a threshold, i.e. if there are enough lymphocytes in the vicinity of the tumor, the chances of eliminating the tumor cells are higher.

In mathematical terms, the problem is one of detecting very small objects in an image, similar to detecting cats or dogs in a picture. However, these objects (lymphocytes and tumor cells) are tiny and they all look alike to a non-expert's eye. Therefore we need to train intelligent algorithms to recognise the different types of cells. We used existing state-of-the-art algorithms and trained them on publicly available datasets, yielding an accuracy measure (the F1 score, the harmonic mean of precision and recall) of 65%. This was not good enough. What is the problem? An expert pathologist needs to tell the algorithm which cells are lymphocytes, which are tumor cells and which are neither. However, a biopsy contains millions of cells and the pathologist does not have the time to indicate what each of these cells is. Therefore the algorithm is trained with a lot of missing (or, to some extent, misleading) knowledge.

In this project we worked on informing the algorithms about these misleading training data, which raised the F1 score to 80%, very close to the performance of a pathologist, but much faster and at a much larger scale. That is, while a pathologist only has time to zoom in with the microscope on a few regions of the biopsy to find cells, our algorithm finds these cells everywhere in the biopsy, making the final sTIL scoring more reliable. In parallel, we developed visualisation tools within this project to qualitatively assess the output of the algorithm.
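
As a concrete illustration of how a sTIL-type score could be derived from the algorithm's cell detections, the minimal sketch below computes the fraction of lymphocytes among the non-tumor cells found near tumor cells. The function names, the input format, the class labels and the neighbourhood radius are illustrative assumptions made for this example, not the project's actual pipeline.

```python
import math

# Illustrative sketch (not the project's actual pipeline): given detected cells
# with (x, y) coordinates and a predicted class, estimate a simple stromal-TIL
# ratio as the fraction of lymphocytes among the non-tumor ("stromal") cells
# that lie within a fixed distance of any tumor cell. The 50-pixel radius and
# the class names below are assumptions made for this example.

TUMOR, LYMPHOCYTE = "tumor", "lymphocyte"
NEIGHBOURHOOD_RADIUS = 50.0  # pixels; illustrative value

def stil_estimate(cells):
    """cells: list of dicts like {"x": float, "y": float, "label": str}."""
    tumor_cells = [c for c in cells if c["label"] == TUMOR]
    if not tumor_cells:
        return None  # no tumor detected, score undefined

    def near_tumor(cell):
        return any(
            math.hypot(cell["x"] - t["x"], cell["y"] - t["y"]) <= NEIGHBOURHOOD_RADIUS
            for t in tumor_cells
        )

    # Stromal compartment: every detected non-tumor cell close to a tumor cell.
    stromal = [c for c in cells if c["label"] != TUMOR and near_tumor(c)]
    if not stromal:
        return 0.0
    lymphocytes = [c for c in stromal if c["label"] == LYMPHOCYTE]
    return len(lymphocytes) / len(stromal)

if __name__ == "__main__":
    detections = [
        {"x": 10, "y": 10, "label": "tumor"},
        {"x": 20, "y": 15, "label": "lymphocyte"},
        {"x": 30, "y": 40, "label": "other"},
        {"x": 400, "y": 400, "label": "lymphocyte"},  # far from the tumor, ignored
    ]
    print(f"sTIL estimate: {stil_estimate(detections):.2f}")  # prints 0.50
```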