Is Artificial Intelligence Ethical or Pragmatic? Thought-provoking debates

As artificial intelligence becomes a decision-maker, companies are compelled to provide clear answers not only to the question of “what should be done?” but also to “how should it be done?” If an AI replacing a customer representative makes a statement that is unethical but boosts sales, who bears the responsibility? Or what if an algorithm evaluating credit applications ends up reproducing societal inequalities? These examples place the business world at the heart of not only technological but also ethical decisions. At CBOT, having witnessed the AI transformation of hundreds of major institutions, we have clearly seen this: ethical and pragmatic approaches are not alternatives to each other, but two complementary necessities. In the rest of this article, we explain how this dilemma can be resolved, what kind of choices are made in which areas, and how CBOT has acted as a guide throughout the process.

The Tension Points of Ethics and Pragmatism

Every AI solution embeds a system of preferences in the background: how the model learns from data, how users experience it, and which business outcomes it optimizes for. But the core question here is: which priority should take precedence over the other?

For instance, an AI used in a bank’s customer service may develop different communication strategies based on the customer profile. This personalization increases customer satisfaction. However, if this personalization pushes the boundaries of user data too far, where does the ethical line begin?

At CBOT, we always adopt the same approach in such scenarios: principle first, impact second. Because without principle, impact is short-lived. And this issue of principle is not limited to law alone. Corporate values, connection with society, and brand reputation are also natural components of this process.

A Confrontation of Decision Mechanisms: Ethical Code or KPI?

Today’s managers are guided not only by performance indicators but also by the behavior of algorithms. However, when algorithms are optimized purely for KPI targets, they may produce unwanted biases, exclusionary behavior, and even discrimination.

For example, when HR software recommends new hires based on past recruitment data, it may replicate gender or ethnic biases present in that data. In such a case, the AI’s “accurate prediction” could produce outcomes that violate the principle of equality.
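The mechanism behind this is simple to illustrate. The sketch below (with entirely invented data and a deliberately naive scoring rule, not any real recruitment model) shows how a model that scores candidates by historical hiring rates reproduces the disparity baked into its training data:

```python
# Hypothetical historical hiring records: (group, hired) pairs.
# The data itself encodes a past bias: group "A" was hired far more often.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def hire_rate(records, group):
    """Fraction of past candidates from `group` who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A naive model that scores candidates by how often "similar" past
# candidates were hired simply replicates the historical disparity.
score_a = hire_rate(history, "A")  # 0.8
score_b = hire_rate(history, "B")  # 0.3

print(f"Group A score: {score_a:.2f}, Group B score: {score_b:.2f}")
# An equally qualified candidate from group B receives a lower score
# purely because of who was hired in the past.
```

The model is statistically “accurate” about the past, which is precisely why it is unfair about the future.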

At CBOT, when designing such systems, we incorporate not only technical accuracy but also social responsibility criteria. For instance, when developing decision-support systems, we constrain the system’s suggestions to a defined “field of observation” and leave the final decision to humans. This way, the machine’s suggestion is combined with human judgment.
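One way such a guardrail can be structured is sketched below. This is an illustrative pattern, not CBOT’s actual implementation: the approved action list, the class names, and the credit-review actions are all hypothetical. The model may only propose actions from an approved set, and nothing is executed without a human choosing among them:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str
    confidence: float

# Hypothetical "field of observation": the model may only *suggest*
# actions from this approved list; it can never act on its own.
APPROVED_ACTIONS = {"request_more_documents", "escalate_to_officer", "approve"}

def constrain(suggestions):
    """Discard any suggestion outside the approved field of observation."""
    return [s for s in suggestions if s.action in APPROVED_ACTIONS]

def decide(suggestions, human_choice):
    """The machine proposes; the human disposes."""
    allowed = {s.action for s in constrain(suggestions)}
    if human_choice not in allowed:
        raise ValueError("Final decision must come from a human, "
                         "chosen among the allowed options")
    return human_choice

# The model's top suggestion ("auto_reject") is outside the approved set,
# so it is filtered out before a human ever has to consider it.
raw = [Suggestion("auto_reject", 0.9), Suggestion("escalate_to_officer", 0.7)]
final = decide(raw, human_choice="escalate_to_officer")
print(final)  # escalate_to_officer
```

The design choice here is that the filter runs before the human sees the options, so an out-of-bounds action cannot even be rubber-stamped by accident.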

Integrating “Ethics” into the Project File

An AI project may appear to be a technical endeavor, but it is, in fact, a strategic decision from start to finish. Just like budget or timeline planning, ethical principles should also be defined at the outset of the project.

At CBOT, we make a point of understanding a company’s ethical policies and developing solutions in alignment with them at the start of a project. We see this not merely as a tool for “compliance” but as a cornerstone for long-term trust.

Because without trust, the adoption of AI remains limited. And today, the greatest success of AI systems lies not in how accurately they work, but in how much trust they inspire.

What Should Companies Do as Regulations Catch Up?

The European Union’s AI Act and similar regulations are now turning ethical sensitivity into a legal obligation for companies. However, regulation often lags behind the pace at which companies deploy AI. That’s why being proactive is essential.

At CBOT, we recommend “ethical radar” systems to organizations during this process. These systems continuously monitor the decision-making processes of AI and report potential deviations to managers. This way, ethical mistakes can be identified before they become crises.
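At its core, such monitoring can be as simple as watching outcome rates per group and alerting when any group drifts too far from the overall average. The sketch below is a minimal illustration of that idea, not a production system; the drift threshold, the grouping, and the data stream are all assumed for the example:

```python
from collections import defaultdict

# Illustrative tolerance: flag any group whose approval rate deviates
# from the overall rate by more than 15 percentage points.
DRIFT_THRESHOLD = 0.15

def radar(decisions):
    """decisions: iterable of (group, approved) tuples.
    Returns a list of (group, rate) alerts for groups drifting
    beyond the threshold from the overall approval rate."""
    totals = defaultdict(lambda: [0, 0])  # group -> [approved, count]
    for group, approved in decisions:
        totals[group][0] += int(approved)
        totals[group][1] += 1
    overall = (sum(a for a, _ in totals.values())
               / sum(n for _, n in totals.values()))
    alerts = []
    for group, (approved, count) in totals.items():
        rate = approved / count
        if abs(rate - overall) > DRIFT_THRESHOLD:
            alerts.append((group, round(rate, 2)))
    return alerts

# A synthetic decision stream: both groups drift 20 points from the
# overall 55% approval rate, so both trigger an alert.
stream = [("A", True)] * 75 + [("A", False)] * 25 \
       + [("B", True)] * 35 + [("B", False)] * 65
print(radar(stream))  # [('A', 0.75), ('B', 0.35)]
```

In practice the monitored signal would be richer (feature drift, complaint rates, override frequency), but the principle is the same: the radar reports deviations to managers before they become crises.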

This approach is especially crucial in the finance and public sectors. Because in these sectors, a single mistake can affect not just one customer, but an entire society.

In an era where AI is becoming central to corporate decision-making, the ethics-pragmatism dilemma has become one of the most critical strategic areas to manage. This dilemma is neither merely a risk area nor a luxury. On the contrary, it is the cornerstone of competitive advantage and sustainable success. At CBOT, we believe that AI systems operate not only with data but also with values. In other words, the path to success passes not only through the right models but also through the right principles. And this leads us to a clear conclusion: The question of “Ethical or pragmatic?” is outdated. The real question today is: How do we make the ethical choice pragmatic?