Solution architecture for AI … and the meaning of design

I’m a solution architect who has been working with machine learning-based systems for years. But more than an architect, I consider myself a solution designer, precisely because of my experience with AI and all the misconceptions around the topic.

Let’s start with the first revelation: A lot of people talk about AI ethics and human-centric AI at fancy tech conferences… but in reality, when you are working on a concrete project or product that is using AI or combinatorial optimization components, things are quite different.

It’s so funny to me… so many people talk about how we have to teach non-technical users about AI, but my perception is that the proper design of AI solutions is a deeply misunderstood topic among people with a technical background as well. Yes, that’s a provocative statement, but I mean it.

Here is my personal list of serious misconceptions I’ve encountered frequently concerning solution architecture for solutions with AI components:

  1. Design for AI… that is just a few colorful buttons on the UI, right?
  2. What does solution architecture have to do with design anyways?
  3. Costs for developing the technical system interfaces are relatively high.
  4. Data is all you need for an AI system with high accuracy.
  5. This is typical agile software development.

I’ll explain in detail now why I consider them misconceptions — issues that usually hamper the development of actually valuable AI-based solutions.

1. Design for AI… aka a few colorful buttons on the UI, right?

To be honest, I have encountered this issue a lot in my career so far. When I bring in a UI/UX designer early in the solution design process, I’ve seen a lot of software developers question that move harshly. I think it has a deep-rooted cause in how technical folks have approached UI design for web applications and mobile apps for decades.

It always startles me, because delivering value to the user is one of the core agile principles. Yet good UI/UX design is still treated as something that can be added in parallel, far detached from the software development, or tacked on in the last phase of the project. In my experience, that pushes the real UX design, and any deep human-to-machine interaction design, into a position where there is no room left to actually shape the user experience with the AI system; all that remains is a few beautification touches on the surface. That can cost the system a great deal of its value… and in the end the engineers are surprised and frustrated that the users don’t fully appreciate the solution.

2. What does solution architecture have to do with design anyways?

Imagine asking an architect of physical buildings what their job has to do with design.

It sounds ridiculous, right?

Imagine claiming that the design of a chair only concerns the colors and decorations on it…

Yet when it comes to software, design is often considered entirely the realm of UI design. I believe we have to understand solution architecture with an emphasis on aesthetic and functional design in addition to purely technical design… just like an architect of buildings would.

Of course it’s important to note that I’m talking about solution architecture here. I think the situation is quite different for software architecture.

But the lack of focus on aesthetics and functionality is especially pronounced for AI systems. I’ve seen so many instances of researchers or engineers who are far removed from any real software user, yet make significant design decisions early on that are either not acknowledged as design decisions at all or are treated as untouchable. AI really needs proper human-to-machine interaction design. Far too often this design is carried out from the machine’s perspective, not from the perspective of a human user. Simple examples are which sets of parameters the user can adjust, or what functionality the solution has to offer to complement the AI capability. This is exactly where the machine-centric approach fails to deliver value to the customer. My observation is that AI systems usually don’t fail because of technicalities but because of the lack of strategic and thorough human interaction design.
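To make this a bit more tangible, here is a minimal, purely illustrative sketch in Python (all parameter names are made up for the example) of what it can look like to deliberately separate the knobs a user should be able to adjust from the machine-perspective parameters that only make sense to engineers:

```python
from dataclasses import dataclass


@dataclass
class UserFacingControls:
    """Knobs chosen from the user's perspective: they map to decisions
    the user actually understands and cares about."""
    max_suggestions: int = 5           # how many AI suggestions to show at once
    confidence_threshold: float = 0.8  # hide predictions the model is unsure about
    allow_manual_override: bool = True # let the user correct or reject a prediction


@dataclass
class InternalModelConfig:
    """Machine-perspective parameters: meaningful to engineers,
    meaningless (or harmful) to expose to end users."""
    learning_rate: float = 1e-4
    batch_size: int = 32
    dropout: float = 0.1
```

Which parameters end up in the first class and which in the second is exactly the kind of decision that should be made consciously, from the user’s perspective, and not by accident.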

3. Costs for developing the technical system interfaces are relatively high

Sure, a challenge that comes with AI components is that they usually bring the most value in large, complex software systems with many components. It’s natural to look at the technical interfaces between components in concept architectures. It is an engineering feat to be proud of; it feels tangible and controllable. But it’s a bit like pricing a complex building by focusing only on its structural statics. That backbone is crucial for technical feasibility, yet when you break down the development effort for a software solution, it usually turns out to be a comparatively minor factor.

In my experience, the multiple training iterations and the functionality around the AI cost far more in terms of development effort than building the technical interfaces, which are highly standardized in some domains.

4. Data is all you need for an AI system with high accuracy

Don’t get me wrong. Data is incredibly important for useful AI systems. By that I don’t only mean the data itself but also suitable sampling methods to remove certain biases… and all the intricate data cleansing that is necessary to properly work with the data. So many aspects of data quality are crucial to training an accurate AI. I don’t think it’s unreasonable to say that it’s just as important as the machine learning pipeline itself.
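As a small, hedged example of what I mean by sampling and cleansing, the following sketch uses pandas on a hypothetical tabular dataset with a "label" column. The column name and the naive downsampling strategy are assumptions for illustration, not a recipe:

```python
import pandas as pd


def clean_and_rebalance(df: pd.DataFrame, label_col: str = "label", seed: int = 42) -> pd.DataFrame:
    """Basic cleansing plus naive rebalancing by downsampling the majority classes.
    Real projects need far more than this (domain checks, bias audits, ...)."""
    df = df.drop_duplicates()           # remove exact duplicate records
    df = df.dropna(subset=[label_col])  # a sample without a label is useless for supervised training
    # Downsample every class to the size of the rarest class: one simple example
    # of a sampling choice that directly shapes model behaviour.
    min_count = df[label_col].value_counts().min()
    return (
        df.groupby(label_col, group_keys=False)
          .apply(lambda g: g.sample(n=min_count, random_state=seed))
          .reset_index(drop=True)
    )
```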

But then again, when I talk to colleagues with a technical background, I often observe that the data ingestion is considered far more carefully than the human-to-machine interaction. Which brings me to the second part of the issue: accuracy.

There are many methods for calculating the accuracy of an AI system’s predictions. As all modern AI systems are based on stochastic algorithms, there will always be an error rate. Accuracy measures are often used to compare machine learning models on a technical level. It’s a fatal mistake, though, to think that these measures correlate with value for the user. Interestingly, my experience is that particular errors made by the AI can matter far more to a user than the overall accuracy. Machine learning based models usually make mistakes that are very different from the mistakes a human would make on the same task.
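Here is a tiny worked example with invented numbers, just to illustrate the point: two models can report exactly the same overall accuracy while making completely different kinds of mistakes, and only one of them loses the cases the user actually cares about.

```python
import numpy as np

# Toy data: 10 critical positive cases and 90 routine negatives (numbers invented).
y_true = np.array([1] * 10 + [0] * 90)

pred_a = y_true.copy()
pred_a[:5] = 0        # model A silently misses 5 of the 10 critical cases
pred_b = y_true.copy()
pred_b[10:15] = 1     # model B raises 5 false alarms on routine cases instead

for name, pred in [("A", pred_a), ("B", pred_b)]:
    accuracy = (pred == y_true).mean()
    missed_critical = ((y_true == 1) & (pred == 0)).sum()
    print(f"model {name}: accuracy={accuracy:.2f}, missed critical cases={missed_critical}")

# Both models report 95% accuracy, but model A drops half of the cases
# the user cares about most. The headline metric hides the difference entirely.
```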

This is where proper human-to-machine interaction comes back into play. Sure, you can try to blindly throw more data into the training process or tweak the parameters, but that doesn’t solve the core issue. I personally believe that we should strive for a design that enables the user to have a meaningful and comprehensive interaction with the AI, to really teach the machine how to provide valuable predictions. Of course an engineer will immediately jump in at this point to say that many predictions are made in real time or that their sheer volume doesn’t allow for user feedback. But see… this is exactly why a thoughtful design from the very beginning is key. There’s no quick fix for this from a purely technical perspective.
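On the technical side, the kind of interaction I mean can boil down to something like the sketch below. The class and method names are hypothetical and the logic is deliberately simplified; the point is that user corrections become a designed, first-class input to the next training iteration rather than an afterthought.

```python
class FeedbackLoop:
    """Deliberately simplified sketch: user corrections are collected
    and folded back into the next training run."""

    def __init__(self, model):
        self.model = model
        self.corrections = []  # (input, corrected_label) pairs supplied by users

    def predict(self, x):
        return self.model.predict(x)

    def record_correction(self, x, corrected_label):
        # Called from the UI whenever a user fixes a prediction they disagree with.
        self.corrections.append((x, corrected_label))

    def retrain(self, base_dataset):
        # Fold the accumulated corrections into the next training iteration.
        augmented = list(base_dataset) + self.corrections
        self.model.fit(augmented)
        self.corrections.clear()
```

In a real system the corrections would of course be persisted, audited and weighted before retraining; the design question is whether the solution makes room for them at all.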

5. This is typical agile software development

Developing a solution with deep learning algorithms at its core is not your typical software development project. This might not apply to small-scale machine learning projects, but generally, when working with deep learning, I’ve noticed that typical agile approaches don’t work without adjustment. I think it’s because agile software development is often conducted on systems that can be built in small, independent packages, like mobile apps.

I’ve been working on a lot of applications in the deep tech realm, or on solutions that have to be large and complex in order to deliver any value to the user. Perhaps it’s also the scale and the scientific approach that don’t fit naturally with some standards of agile project management. It’s important to apply agile principles as much as possible, but this is where they reach the boundary of feasibility.

Some might argue that the role of a solution architect in itself does not fit into agile frameworks. But I think this is caused by the issue of disconnecting (agile!) design from the technical implementation. Design is often not considered part of product increments, and thus the definition of “developers” in Scrum, for example, is narrowed down to engineers. Perhaps the role of a solution architect in an active project could be similar to that of a UX designer combined with an architect who keeps ownership of the evolving technical architecture. As I believe in the integral importance of human-to-machine interaction design, I think it’s a serious mistake when designer roles are pushed entirely out of Scrum teams and thus out of the regular agile events.

Please note that these misconceptions are based on my personal experience as a solution architect. There are probably far more. Also my observations are obviously biased towards the particular type of AI applications I’ve been working on. Please feel free to add more in the comments. I’m always happy to learn about other misconceptions.
