This is still very much a work in progress, but I have a functional transformer model that I can use to train language translation models. The transformer is based on the original architecture from the paper "Attention Is All You Need". I implemented it in Python using the PyTorch library, and I am currently testing it on the OPUS Books dataset from Hugging Face.
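To give a rough idea of what such a model looks like, here is a minimal sketch (not the exact project code) of a translation transformer built around PyTorch's built-in `nn.Transformer`, which implements the encoder-decoder architecture from "Attention Is All You Need". The vocabulary sizes and hyperparameters below are placeholders, not values from the project.

```python
# Minimal sketch of an encoder-decoder translation transformer in PyTorch.
# Hyperparameters match the "base" configuration from the original paper;
# vocabulary sizes and the dummy inputs are illustrative placeholders.
import math
import torch
import torch.nn as nn


class PositionalEncoding(nn.Module):
    """Adds the fixed sinusoidal position signal from the original paper."""

    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        return x + self.pe[: x.size(1)]


class TranslationTransformer(nn.Module):
    """Encoder-decoder transformer sized like the base model in the paper."""

    def __init__(self, src_vocab: int, tgt_vocab: int, d_model: int = 512,
                 nhead: int = 8, num_layers: int = 6, dim_ff: int = 2048):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, d_model)
        self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
        self.pos_enc = PositionalEncoding(d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            dim_feedforward=dim_ff, batch_first=True,
        )
        self.generator = nn.Linear(d_model, tgt_vocab)
        self.d_model = d_model

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # Causal mask so each target position only attends to earlier positions.
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        src_emb = self.pos_enc(self.src_embed(src) * math.sqrt(self.d_model))
        tgt_emb = self.pos_enc(self.tgt_embed(tgt) * math.sqrt(self.d_model))
        out = self.transformer(src_emb, tgt_emb, tgt_mask=tgt_mask)
        return self.generator(out)  # (batch, seq_len, tgt_vocab) logits


if __name__ == "__main__":
    model = TranslationTransformer(src_vocab=10_000, tgt_vocab=10_000)
    src = torch.randint(0, 10_000, (2, 12))  # dummy source token ids
    tgt = torch.randint(0, 10_000, (2, 9))   # dummy target token ids
    print(model(src, tgt).shape)             # torch.Size([2, 9, 10000])
```

For training data, the OPUS Books corpus can be pulled from Hugging Face with something like `datasets.load_dataset("opus_books", "en-fr")` (the language pair here is just an example) and then tokenized into the source/target id tensors the model expects.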