Wednesday, December 7, 2022

AI for Software Developers

 


When people hear the term “AI,” they often imagine a computer replacing a human, performing the same task but doing it better in some way: faster, cheaper, with higher quality, or all of these combined.

Some people embrace the possibility of computers freeing them from their mundane work, while others are skeptical. The latter may claim that machines are far from matching what humans can do. 

Questions like “How will you teach a computer to do this?” often carry the implication that you can’t. Here are a few examples of this sort of question that were raised in the past: “How will you teach a computer to play Go?” “How will you teach a computer to drive a car?”

Computers already play Go and drive cars, so these questions are now outdated. This gives us reason to believe that questions of this kind that are still outstanding will eventually be answered as well. Whatever professional field we consider, computers are closer to matching human skills than most of us think.

However, replacing a human is not always expedient. Instead of competing with humans, the developers of AI-based technologies may choose a different product strategy and attempt to use algorithms to augment programmers’ work and make them more productive.

In the software development context, we’re clearly seeing AI both performing human tasks and augmenting programmers’ work.

 

Copilot, though a breakthrough in programming-related AI, is still neither a revolution in the industry nor a replacement for human work. Keeping in mind that such a revolution may happen at some point, we still have to keep improving existing software development processes. Helping programmers perform small tasks more efficiently is a vast area for applying AI.

Tools for software developers usually begin with strict rules (“heuristics”) and no AI under the hood. The rules grow more complex as new functionality is built into each tool. Eventually, it becomes nearly impossible for a human to comprehend everything and understand how to modify the tools. This is where AI can help.
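To make this concrete, here is a small hypothetical sketch; the function, rule names, and weights are invented for illustration and not taken from any real tool. It shows how a suggestion-ranking heuristic accumulates special cases over time until the interactions between the rules become hard to reason about:

# Hypothetical example: ranking code-completion suggestions with hand-written rules.
# Each new requirement tends to add another special case, and the interactions
# between rules quickly become hard to follow.
def score_suggestion(suggestion: str, prefix: str, recent_identifiers: list[str]) -> float:
    score = 0.0
    # Rule 1: prefer suggestions that start with what the user already typed.
    if suggestion.startswith(prefix):
        score += 10.0
    # Rule 2: prefer identifiers the user touched recently.
    if suggestion in recent_identifiers:
        score += 5.0
    # Rule 3 (added later): penalize very long suggestions.
    if len(suggestion) > 40:
        score -= 2.0
    # Rule 4 (added even later): soften rule 3 for recently used matches,
    # and so on, until nobody remembers why the weights are what they are.
    if len(suggestion) > 40 and suggestion.startswith(prefix) and suggestion in recent_identifiers:
        score += 1.5
    return score

print(score_suggestion("handleClickOnSidebarNavigationItem", "handle", ["handleClickOnSidebarNavigationItem"]))

Replacing the hand-tuned weights with a learned model is exactly the point where AI can help, at the price of giving up explicit, auditable rules.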

 

The sum of all the minor AI-powered improvements to user productivity can result in an impressive overall boost. However, it does come at a cost.

AI-based systems work well in most cases, but there are situations where they can produce weird results. Showing such results to users costs us some of their trust. Each time we replace strict rules with an AI-powered decision-making system, we have to decide whether the tradeoff is worth it: we can improve our average decision quality, but we may lose some of the users’ trust.
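One common way to manage this tradeoff is to keep the old deterministic rule as a fallback and only surface the model’s answer when its confidence is high. The sketch below uses assumed names and an assumed threshold value for illustration; it is not a description of any particular product:

# Hypothetical sketch: fall back to the old rule when the model is unsure,
# so that weird low-confidence predictions never reach users.
from typing import Callable, Tuple

CONFIDENCE_THRESHOLD = 0.8  # assumed value; in practice tuned on held-out data

def decide(
    features: dict,
    predict: Callable[[dict], Tuple[str, float]],  # returns (prediction, confidence)
    heuristic: Callable[[dict], str],
) -> str:
    """Prefer the model only when it is confident; otherwise keep the old rule."""
    prediction, confidence = predict(features)
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction  # trust the model when it is confident
    return heuristic(features)  # predictable behavior, even if its average quality is lower

# Toy usage with stand-in implementations:
toy_predict = lambda f: ("rename_variable", 0.65)
toy_heuristic = lambda f: "no_suggestion"
print(decide({"token": "tmp"}, toy_predict, toy_heuristic))  # prints "no_suggestion"

The fallback keeps the worst-case behavior familiar to users, while the model still improves the average case.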

It would be nice to create flawless systems in which trust is never lost to poor suggestions, but there are several obstacles to this. Many machine learning algorithms need example data for the training phase, and the quality of that dataset is critical. We often already know what data we need, but obtaining it is either costly or outright illegal.
