A quick look at Gen AI-enabled development
We are no longer strangers to the newest wonder of technology: yes, I am talking about Generative AI. In one way or another, tools like ChatGPT, Gemini, Galaxy AI, Copilot, and Claude have crept into our official work culture as "productivity enhancement tools". No question, these tools do increase productivity; some of my developers have reported that their productivity has improved by nearly 30%.
It might have a similar effect on me as well; the percentage might even be higher. If I look at my productivity workflow, I have AnythingLLM installed on my machine, hooked to an OpenAI model deployed on Azure. Since I work in the services industry, this helps me manage multiple client contexts as workspaces, with each workspace broken down into threads that capture any new development for that client. All the documents for that client are embedded in the workspace, which helps the model generate responses with proper context. Truly amazing! I also occasionally test open-source models from Hugging Face using LM Studio.
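For a sense of what sits underneath that setup, here is a minimal sketch of calling an OpenAI model deployed on Azure with the official openai Python SDK. AnythingLLM does this wiring for you; the endpoint, deployment name, and API version below are placeholders, not my actual configuration.

```python
# Minimal sketch: talking to an Azure-hosted OpenAI deployment directly.
# All credentials and names here are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="<azure-openai-key>",                          # placeholder key
    api_version="2024-02-01",                              # assumed API version
    azure_endpoint="https://<resource>.openai.azure.com",  # placeholder endpoint
)

response = client.chat.completions.create(
    model="<deployment-name>",  # Azure deployment name, not the raw model name
    messages=[
        {"role": "system", "content": "You are an assistant scoped to one client workspace."},
        {"role": "user", "content": "Summarize the latest requirement changes for this client."},
    ],
)
print(response.choices[0].message.content)
```

A tool like AnythingLLM layers workspaces, threads, and document embeddings on top of calls like this, so each client's context travels with every request.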
For development, I am leveraging a premium license of Cursor, and I feel this setup has made me invincible! And why not: with years of development experience and systems understanding, the grunt work is taken away from me and I just have to tweak, adjust, and reap the benefits. In fact, thanks to the LLM's occasionally innovative suggestions, I come across new approaches to problem-solving. For example, the other day I asked Cursor to write me a program to upload a file to S3 and store the S3 path in the database. Pretty simple, right? Traditionally everyone thinks of writing boto3 code, but not the model. It wrote code that was Pythonic, elegant, and leveraged django-storages. Although the approach was new to me, understanding what the model had done took me less than a minute.
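To show why that answer surprised me, here is a rough sketch of the django-storages approach the model took, reconstructed from memory rather than the exact code it generated. The bucket name, region, and the ClientDocument model are illustrative placeholders.

```python
# settings.py (excerpt): point Django's default file storage at S3 via
# django-storages (requires the django-storages and boto3 packages).
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "client-uploads"   # placeholder bucket
AWS_S3_REGION_NAME = "ap-south-1"            # placeholder region

# models.py: the FileField writes the bytes to S3 through the storage
# backend, while the database column stores the object key (the "S3 path").
from django.db import models

class ClientDocument(models.Model):
    file = models.FileField(upload_to="uploads/%Y/%m/")
    uploaded_at = models.DateTimeField(auto_now_add=True)

# views.py: saving the upload needs no explicit boto3 calls at all.
from django.http import JsonResponse

def upload_document(request):
    doc = ClientDocument.objects.create(file=request.FILES["file"])
    return JsonResponse({"s3_key": doc.file.name, "url": doc.file.url})
```

The elegance is that the storage backend, not my view code, owns the S3 details: swap the backend and the same model works against local disk or another cloud.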
This brings me to my next point: understanding the code that a model spits out. As mentioned before, someone with experience will reap its full benefits. But for freshers or juniors who are still on their learning trajectory, this can be a disaster. Many times, my juniors did not understand the piece of code they had just written, because it came straight out of the model's mouth and it worked. They got it working, their "task" was done, and they never had to put in any effort to understand that piece of code.
Now people will ask: how is this different from the Stack Overflow days? There is a difference, hear me out. In the Stack Overflow days, it was rare for something to work right out of the box. To make a piece of code work, people had to adapt, change, tweak, and read documentation to achieve the desired output. In that process, they learned something, even if not everything. But in this AI age, the tools are generally "right" and spit out code tailored to your context, which is all the more reason for people not to put in the effort to understand the output, because their "delivery is done". I do understand that learning is driven mostly by personal motivation, and that for corporates it is all about "output"; this blog is not meant to offend those who think that way. But I feel a moral obligation to put this out for everyone's consideration.
I have faced these challenges first-hand, and I am sure many of the folks riding this AI wave have made similar observations. To address this, I am working on an approach that might help the younger generation of coders and developers. Let's be clear: these generative technologies are here to stay, and using them efficiently and effectively is the new game.
I will try to document these approaches in the form of a blog post or a GitHub repository soon; keep checking my website for updates.