I have worked on decoding PNG images before. After reviewing the sample images, I find that they are all the same size, use the same character definitions in 'similar' places, and can be divided into zones.
The approach I used earlier, and which I propose now, searches for a 100% match, unlike conventional OCR/machine-learning techniques, which carry an accuracy figure. I would 'train' my solution by providing all possible inputs. For example, for the Y axis, if I provide the definitions of "$", "1", "2", ..., "0" from the sample images, my solution scans the image and detects every 100% match. This gives us the recognised characters together with their locations, which can then be assimilated. The same holds for the X-axis labels, though we would have to scan several images to build a complete definition set. The X-axis labels pose a problem due to their orientation, and also because the year appears separately on a different line whose relative location changes. A minimal sketch of this exact-match search follows below.
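To make the idea concrete, here is a minimal sketch of the exact-match glyph search, assuming the image and the glyph templates are already decoded into 8-bit RGBA buffers. The names (Image, Glyph, findExactMatches) are illustrative only, not existing code:

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

struct Image {
    unsigned width = 0, height = 0;
    std::vector<std::uint8_t> rgba;   // 4 bytes per pixel, row-major
};

struct Glyph {
    std::string label;                // e.g. "$", "1", "2"
    Image pixels;                     // cropped from the sample images ("training" input)
};

struct Match { std::string label; unsigned x, y; };

// True only if every pixel of the glyph equals the image pixel at the (x, y) offset.
static bool exactMatchAt(const Image& img, const Glyph& g, unsigned x, unsigned y) {
    for (unsigned gy = 0; gy < g.pixels.height; ++gy) {
        const std::uint8_t* imgRow = &img.rgba[((y + gy) * img.width + x) * 4];
        const std::uint8_t* glyRow = &g.pixels.rgba[gy * g.pixels.width * 4];
        if (!std::equal(glyRow, glyRow + g.pixels.width * 4, imgRow))
            return false;
    }
    return true;
}

// Slide every glyph definition over the image; record the label and location of each 100% match.
std::vector<Match> findExactMatches(const Image& img, const std::vector<Glyph>& glyphs) {
    std::vector<Match> out;
    for (const Glyph& g : glyphs)
        for (unsigned y = 0; y + g.pixels.height <= img.height; ++y)
            for (unsigned x = 0; x + g.pixels.width <= img.width; ++x)
                if (exactMatchAt(img, g, x, y))
                    out.push_back({g.label, x, y});
    return out;
}
```

Since the images are all the same size and zoned, the scan can later be restricted to the Y-axis and X-axis zones instead of the whole image.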
Key point: the 100% match criterion still needs validation. I am willing to invest some time to test one case.
I shall use the freely available lodePNG source code for decoding the files; the rest will be fully custom code (loading sketched below).
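For reference, loading a file into the RGBA buffer with lodePNG would look roughly like this (lodepng.h / lodepng.cpp are the single-file sources; the program layout here is just a sketch):

```cpp
#include <iostream>
#include <string>
#include <vector>
#include "lodepng.h"

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: decode <file.png>\n"; return 1; }

    std::vector<unsigned char> rgba;   // 4 bytes per pixel after decoding
    unsigned width = 0, height = 0;

    // lodepng::decode defaults to 8-bit RGBA output regardless of the source colour type.
    unsigned error = lodepng::decode(rgba, width, height, std::string(argv[1]));
    if (error) {
        std::cerr << "decode failed: " << lodepng_error_text(error) << "\n";
        return 1;
    }
    std::cout << width << " x " << height << " px, " << rgba.size() << " bytes\n";
    return 0;
}
```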
Proposal: budget and time to be re-assessed based on the following clarifications:
What exactly is the output format for each image: a CSV file? And what should it contain: just the Y- and X-axis labels, or also the red scan-point data? Please share a sample file. Also, please let me know the total number of images.