Alright, it's been at least three months since I last posted. I have to admit I'm not very consistent; I've been switching from project to project. I also had COVID, which really, really sucked and took the wind out of my sails, and my previous work didn't really pan out.

On the research front, QuestSim by Facebook came out. It's a full-body tracking solution for the Quest 2, and skimming the paper, I don't see any reason why it wouldn't work. They used a physics engine, which I found interesting, but I know jack about physics, so that went on the back burner. Those are my thoughts for now. Anyway, I'm back on track, and I'll try to update this post once a day.

Like most people, I'm playing around with Stable Diffusion. What I'm seeing on the internet right now is that a lot of people are doing prompt engineering, and they do it mostly by looking at the input, looking at the output, and that's it. Personally, I'm too lazy for that kind of prompt engineering, so instead I'm going to do EDA on the dataset Stable Diffusion was trained on: LAION-400M. The catch is that the metadata alone is fairly big, about 50 GB.

So before I tackle that, I'm going to do exploratory data analysis on a variant of Stable Diffusion called Waifu Diffusion, which is basically SD fine-tuned on roughly 56,000 extra anime images. The metadata for those images is a far more manageable 812 MB. The reason I'm doing this is that it should allow for faster iteration, and I think most of the code that works on the small dataset will also work on the bigger one, as long as I keep in mind which libraries I plan to use for the bigger set. So yeah, that's the plan. I'll report back tomorrow. Take it easy.
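To make the small-to-big jump painless, the trick is to stream the metadata instead of loading it all into memory: code that reads row by row works the same whether the file is 812 MB or 50 GB. Here's a minimal sketch of that idea using only the standard library. The column names and the inline sample are made up for illustration; the real metadata files have their own schema (and LAION's are parquet, so a batched reader like pyarrow would stand in for the csv loop).

```python
import csv
import io
from collections import Counter

# Hypothetical stand-in for the real metadata file; the actual
# Waifu Diffusion / LAION metadata has different columns and formats.
sample = io.StringIO(
    "url,caption\n"
    "http://example/1.png,1girl long_hair smile\n"
    "http://example/2.png,landscape sky clouds\n"
    "http://example/3.png,1girl smile school_uniform\n"
)

tag_counts = Counter()
rows = 0
# DictReader streams one row at a time, so memory use stays flat
# no matter how large the underlying file is.
for row in csv.DictReader(sample):
    rows += 1
    tag_counts.update(row["caption"].split())

print(rows)                       # 3
print(tag_counts.most_common(2))  # [('1girl', 2), ('smile', 2)]
```

The same loop body (count rows, tally caption tokens) is exactly the kind of EDA pass that should transfer to the 50 GB set unchanged, which is the whole point of prototyping on the small one first.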