
The Secret To Handling a Developer’s Responsibility For Building AI Programs


Artificial Intelligence (AI) has taken the driver’s seat in recent years. Some years ago, attempts were made to bring AI into the mainstream with big promises; unfortunately, the ecosystem was not ready then. Today, we have the prerequisites for AI to flourish to its full potential. Moreover, we genuinely need AI-powered apps and tools.


The coronavirus pandemic has taught us that we need assistive AI that will not only work closely with people and reduce their effort but also take the wheel in critical situations. Over the last two years, much of the world was under lockdown to curb the spread of the virus; that was exactly the right time for AI to step in and help manage the situation so that the economic cycle would not be hit so badly.



However, AI, as its name suggests, is artificially intelligent, not naturally intelligent. Since it is developed by humans, it carries the probability of human error. Let us see how this plays out. One classic example is the Uber self-driving car accident in which a pedestrian was killed. In that case, there was an actual person in the driver’s seat who was distracted by a mobile phone. Despite the company’s precautions, the accident occurred, and because the car was still in a testing phase, the blame fell on the person in the driver’s seat.


Similarly, once such cars roll out of the testing phase and no one can take control, who will take the blame in case of mishaps? Whom will we blame: the algorithm developer, the car owner, the car manufacturer, or perhaps the software developer who wrote the software for the self-driving car?

Robert C. Martin (Uncle Bob) said in one of his video lectures:

“A small bug in the code can have a larger impact for which a developer is to be blamed.”

Software developed for self-driving cars, finance, military applications, the medical domain, and similar fields is extremely critical, which stresses the fact that it ought to be bug-free. AI is in the same league: as a self-learning system that adapts itself to achieve goals, it must be bug-free as well.


For the sake of explanation, let us take another example: supervised learning, in which a labeled training data set is given to the algorithm so it can learn and be measured for accuracy. Here, data cleansing and label accuracy are crucial; incorrect data leads to incorrect learning. Algorithm developers have the responsibility to ensure there is no flaw in either the data or the algorithm.
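As a minimal sketch of that point (assuming scikit-learn is available; the dataset here is synthetic and not from any real project), the snippet below trains the same classifier twice, once on clean labels and once on partially corrupted labels, to show how bad data leads to bad learning:

```python
# Sketch: label noise in supervised training data degrades accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean labels: the model learns the true pattern.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean labels:", accuracy_score(y_test, clean_model.predict(X_test)))

# Flip 30% of the training labels to simulate poor data cleansing.
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

noisy_model = LogisticRegression(max_iter=1000).fit(X_train, noisy)
print("noisy labels:", accuracy_score(y_test, noisy_model.predict(X_test)))
```

Running this, the accuracy on the noisy-label model drops noticeably: the algorithm faithfully learned from incorrect data, exactly as coded.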


Ethics and moral values are abstract human concepts that a machine does not understand. A simple program will run its code no matter what that code does. For instance, a piece of hacking software can pull money out of someone else’s bank account without ever knowing that doing so is morally wrong.


In such situations, the developer should build and train the software to follow ethical constraints. The level of expertise of the software developer is carried forward into the code. Hence, a developer building an AI system should ensure they have enough technical background before putting the system into production.


In other cases, where the software behaves exactly the way it is coded, or is a pure function of its input, it is unlikely that the software will do something it was not coded for. With AI, things are trickier: we train it toward a goal, and during and after training the software can adapt itself to achieve that goal in ways we never explicitly wrote down.
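To make the contrast concrete, here is a small illustrative sketch (the ThresholdClassifier is a toy I invented for this example, not a real library class). The pure function is fully determined by its code; the learned rule depends on its training data, so two instances of identical code can behave differently:

```python
def apply_discount(price: float) -> float:
    # Pure function: the same input always yields the same output,
    # so behaviour is fully determined by the source code.
    return price * 0.9

class ThresholdClassifier:
    """Toy 'learned' rule: its behaviour depends on the data it saw."""
    def fit(self, values, labels):
        # Learn a cutoff as the mean of the positive examples.
        positives = [v for v, l in zip(values, labels) if l == 1]
        self.cutoff = sum(positives) / len(positives)
        return self

    def predict(self, value):
        return 1 if value >= self.cutoff else 0

a = ThresholdClassifier().fit([1, 2, 3, 10], [0, 0, 1, 1])  # cutoff = 6.5
b = ThresholdClassifier().fit([1, 2, 3, 4], [0, 1, 1, 1])   # cutoff = 3.0
print(a.predict(5), b.predict(5))  # 0 1 -- same code, different behaviour
```

The code of `a` and `b` is identical; only the data differs. That is why testing AI systems means testing the data and the training process, not just the source code.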


I briefly touched on the topic of supervised learning. We can imagine a similar scenario with unsupervised learning. AI is like a child that can be spoiled under the wrong influence. A human being, with conscience and empathy, will think at least once before doing something unethical, but machines have no emotions. If an AI algorithm is coded wrongly, the machine will simply execute the code, as the sketch below shows.
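As a hedged sketch of unsupervised learning (again assuming scikit-learn; the data is synthetic and purely illustrative), a clustering algorithm finds structure in whatever data it is given, with no judgment about whether acting on that structure is ethical:

```python
# Sketch: KMeans groups whatever data it receives; it has no notion
# of whether the patterns it amplifies are fair or harmful.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two synthetic groups; imagine these are user profiles.
data = np.vstack([rng.normal(0, 1, (100, 2)),
                  rng.normal(5, 1, (100, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(np.bincount(labels))  # the algorithm segments blindly, as coded
```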


In conclusion, to handle the responsibility of a developer working on AI, remember the key difference between humans and software: software can quickly create copies of itself and travel across the globe over the internet, and a human cannot. Thus, while it takes an evil person a lot of effort to influence others and pull them into their league, bad software can scale itself instantly.


However, don’t lose hope just yet! As a suggestion to counterbalance this, developers need to think about creating smarter software built on more generic rules and smart code, going beyond the typical CRUD thought process, where the rules cannot be altered by the software or by the data it operates upon. Keeping the rules fixed makes the behavior of the software more predictable. Just some food for thought!
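One possible reading of that idea, sketched below under my own assumptions (the `model` object and its `predict` method are hypothetical stand-ins for any learned component), is to wrap the adaptive part in hand-written invariants that neither the software nor its data can alter:

```python
# Sketch: fixed guard rails around an adaptive component.
def safe_transfer_amount(model, account_balance: float, request: dict) -> float:
    # Adaptive, data-driven part: may change as the model retrains.
    suggested = model.predict(request)
    # Hard invariants, unaltered by the software or its data:
    # never negative, never more than the available balance.
    return max(0.0, min(suggested, account_balance))
```

However clever the learned component becomes, the guard rails bound what it can actually do, which is where the predictability comes from.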


If you liked this article and want to find out more about what we do, head over to our LinkedIn page or shoot us a message here.
