Well, so much for daily updates, lol.
The elbow project really was a lot more effort than I anticipated. To make a long story short:
Just chasing down the dataset and cleaning it up proved to be quite a journey. I used the 3DPW dataset, but before finding that, I looked into quite a few others and discarded those that, for one reason or another, wouldn’t fit my purpose. It involved a lot of googling and MacGyvering.
As a framework for 3D point estimation, I chose VideoPose3D because it seemed to be the most performant according to the literature. I got it to work after much fiddling around. I started training a simple multi-layer perceptron on it. And… it completely flopped. It overfit by quite a large margin.
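For context, the failure mode looked roughly like this. Here is a minimal, self-contained sketch (synthetic data, not my actual 3DPW features; all the sizes are hypothetical) of a one-hidden-layer MLP with far more parameters than training samples:

```python
import numpy as np

# Hypothetical sketch: a tiny one-hidden-layer MLP trained on far too few
# samples, to illustrate the train/validation gap. The data is synthetic.
rng = np.random.default_rng(0)

n_train, n_val, d_in, d_hidden = 20, 200, 10, 64
X_train = rng.normal(size=(n_train, d_in))
X_val = rng.normal(size=(n_val, d_in))
true_w = rng.normal(size=(d_in, 1))
y_train = X_train @ true_w + 0.5 * rng.normal(size=(n_train, 1))
y_val = X_val @ true_w + 0.5 * rng.normal(size=(n_val, 1))

# With 64 hidden units and only 20 samples, the network has far more
# parameters than data points: a recipe for overfitting.
W1 = rng.normal(size=(d_in, d_hidden)) * 0.1
b1 = np.zeros(d_hidden)
W2 = rng.normal(size=(d_hidden, 1)) * 0.1
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
for _ in range(2000):
    h, pred = forward(X_train)
    err = pred - y_train                    # gradient of MSE w.r.t. pred
    gW2 = h.T @ err / n_train
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)          # backprop through tanh
    gW1 = X_train.T @ dh / n_train
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

train_mse = float(np.mean((forward(X_train)[1] - y_train) ** 2))
val_mse = float(np.mean((forward(X_val)[1] - y_val) ** 2))
print(train_mse, val_mse)  # expect the train error well below the val error
```

With 20 samples against a few hundred parameters, the training error sinks well below the validation error, which is exactly the kind of gap I was seeing.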
1) I don’t have enough data. <- most likely explanation
2) I need a different architecture.
3) I could do some more feature engineering, but I’m in a bit of a bind:
I figure it’s also going to take quite a bit of engineering and is a bit too specific for that project. E.g. someone pointed out that even if I got it working, I’d still have to deal with network latency. Also, the problem is that I’d have to really dig into the code of VideoPose3D to make it work. I’d also have to learn the way Windows deals with interprocess communication, which I don’t relish; the only thing I use Windows for is video games.
So I decided to switch to something more familiar, and I’m considering building from the ground up instead of relying too much on GitHub. Am I abandoning it? No.
I may revisit it as time allows, but I really have to think about marketable skills rather than skills that are merely fun to have. What I’m going to do is develop a common set of tools I can use across domains and across datasets.
In spite of the setback, I did get to test drive Julia.
1) It is in general more amenable to functional programming. It has, among other things, a native pipe operator (|>).
2) Its macro system is quite nice.
In Python, you have to write new code if you want to test different neural network architectures (e.g. if you want to change the number of layers). If you want to do it en masse, that’s problematic. It’s the same in Julia, but there you can write a quick macro to generate the code on the fly if you wish.
3)Code surrounding neural networks is in general cleaner, with much less boilerplate.
4) The error messages are cryptic in general, but that may just be a matter of familiarization.
5) Interoperability IS excellent (I was able to load a pickle file from Python without too much trouble), but the error messages from Python code within Julia are even more cryptic. Since it’s interop, I’m willing to cut it some slack: copy-pasting from an IPython notebook to a Julia notebook is none too straining on the mind or the wrist.
6) From a quick glance, there is a lack of libraries, e.g. with respect to transformers, but I wanted Julia mostly for custom mini-projects built more or less from the ground up, so it’s not a big deal for me at the moment.
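To make point 2 concrete: the workaround I’d reach for in Python is to drive the architecture from data rather than from code, i.e. build the layer list from a spec and loop over specs; a Julia macro goes one step further and generates the code itself. A minimal sketch (names like make_mlp are hypothetical, weights are plain numpy arrays rather than a real framework):

```python
import numpy as np

def make_mlp(layer_sizes, seed=0):
    """Build weight matrices for an MLP from a list of layer sizes,
    so sweeping over architectures is just a loop over specs."""
    rng = np.random.default_rng(seed)
    return [rng.normal(scale=0.1, size=(m, n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(weights, x):
    # tanh on hidden layers, identity on the output layer
    for W in weights[:-1]:
        x = np.tanh(x @ W)
    return x @ weights[-1]

# Sweep several depths/widths without writing new code per architecture.
specs = [[10, 32, 1], [10, 64, 64, 1], [10, 128, 64, 32, 1]]
x = np.zeros((5, 10))
for spec in specs:
    out = forward(make_mlp(spec), x)
    print(spec, out.shape)  # each architecture maps (5, 10) to (5, 1)
```

The point of the Julia macro version is that the generated code is ordinary Julia you can inspect and specialize, instead of a loop interpreted at run time.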
Anyways, goal:
Make a tool to visualize the weights and test drive it on several known “easy” then “hard” datasets.
Right now, I’m just looking at the end results, which give a pretty rough estimate.
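As a starting point for that weight-visualization tool, here’s a rough sketch of the kind of per-layer summary I have in mind (the layer names and random stand-in weights are purely hypothetical; real plots would replace the printed stats):

```python
import numpy as np

def weight_summary(layers):
    """Per-layer statistics of weight matrices: a starting point for a
    weight-visualization tool (histograms/heatmaps would come next)."""
    rows = []
    for name, W in layers.items():
        rows.append({
            "layer": name,
            "shape": W.shape,
            "mean": float(W.mean()),
            "std": float(W.std()),
            "frac_near_zero": float(np.mean(np.abs(W) < 1e-3)),
        })
    return rows

# Hypothetical trained weights, just for illustration.
rng = np.random.default_rng(1)
layers = {"dense1": rng.normal(scale=0.1, size=(10, 64)),
          "dense2": rng.normal(scale=0.1, size=(64, 1))}
for row in weight_summary(layers):
    print(row)
```

Even these crude numbers (dead units, exploding layers) tell you more than the end-of-training loss alone.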
Goals for today: explore Julia’s plotting functions.