No, this isn't a blog post written by ChatGPT or any other AI bot. It is, rather, a blog post about the realization and self-awareness of how similar we are to AI. We had to be, of course; after all, the design of neural networks is based on the human brain.
I have had a keen interest in artificial intelligence since I was 16, especially in the workings of neural networks. I was an attentive and particularly good student of Human Biology (GCSE), and the inner workings of neurons always awed me. And to think that we can now build the same kind of neuron inside a computer? Super awe!
So I would search the then-dialup Internet for articles and tutorials on neural networks and how I could easily write one in Visual Basic (a language I was slightly proficient in at the time). But the thing about intricate topics like neural networks is that you really have to understand how they work if you even want to begin learning them. Just scratching the surface is usually not enough.
I would go to my local library (the British Council Library) and search for books remotely related to neural networks. I eventually got my hands on a bunch of machine learning and neuron design books that I brought home. And wow, those books were hard to understand. They were written in English but seemed like Greek to me.
Around that time, I used to follow a guy who maintained a blog about building his own robot. I believe it was called Sluggish Soft in the UK but unfortunately it seems like the blog has been lost in the ether. The robot was called Rodney, named after Professor Rodney Brooks of MIT, and the author of the blog maintained a very detailed journal of his progress. If anyone reading this blog has any update on it, please do let me know! That blog was a huge inspiration to me as a teenager.
The blog had actual code (also in Visual Basic) implementing neural networks for the robot's brain. Now I was armed with the theoretical knowledge from the books and the actual code from the blog. So I invented this methodology of reading a chapter of a book over and over again until it clicked, even if only a little. Chapters on backpropagation, activation functions, etc. were all foreign to me, but as I kept reading them over and over and over again, they started making sense little by little. So much so that I could actually go through some of the code the blogger wrote and understand it a little better. I figured that I was somehow understanding the concepts, abstractly perhaps, but understanding still.
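That Visual Basic code is long gone, but the concept those chapters kept circling, a single artificial neuron with an activation function, can be sketched in a few lines. This is purely illustrative (modern Python, hand-picked weights), not the blogger's original code:

```python
import math

def sigmoid(x):
    # The activation function: squashes any input into the range (0, 1),
    # loosely mimicking how a biological neuron "fires" or stays quiet.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # A single artificial neuron: weighted sum of inputs, then activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Two inputs with made-up weights; the output is the neuron's "firing strength"
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))
```

Backpropagation, the part that took the most rereading, is just the machinery for adjusting those weights when the output is wrong.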
I was able to extrapolate a concept just from my abstract understanding, and the majority of the time my extrapolation was pretty accurate. It's like throwing a dart - you don't know the exact formula to hit the bullseye, but with years of practice your brain builds up a function that can accurately predict the outcome given all the variables. This is essentially reinforcement learning: trial, error, and feedback.
Do we know the formula? No, it's just a black box that we call experience: an accumulation of trial and error, nudging the variables here and there until we achieve what we want. Just like the current generation of AI.
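That nudge-and-check loop can be made concrete with a toy dart thrower. Everything here is hypothetical (the "physics" is a made-up function with its optimum hidden inside), but it shows how a variable converges on the right value without the learner ever seeing the formula:

```python
import random

def throw_dart(angle):
    # Hypothetical physics: distance from the bullseye for a given angle.
    # The "formula" (optimum at 45 degrees) stays hidden inside this black box.
    return abs(angle - 45.0)

random.seed(0)
angle = 10.0                       # first clumsy throw
best_miss = throw_dart(angle)
for _ in range(1000):              # years of practice, compressed
    nudge = random.uniform(-2, 2)  # tweak the variable a little
    miss = throw_dart(angle + nudge)
    if miss < best_miss:           # keep only the nudges that improve things
        angle += nudge
        best_miss = miss

print(round(angle, 1))  # ends up near 45, with no formula in sight
```

The loop never learns *why* 45 works; it just accumulates nudges that happened to help. That accumulation is what we call experience.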
The other day I was talking to ChatGPT and had this weird sensation that this "innocent being" was learning much the same way I did as a 16-year-old: it tries to make sense of what I am saying by iterating through the conversation (and a million other conversations) over and over again. It is just estimating, after all, what I will say next. But that is exactly what we do as humans - we estimate others during interactions and adjust our behavior based on the feedback.
It was an unsettling feeling: my learning experience was very akin to that of an AI.