testAIng Solutions (tai) Launched Globally

The world’s first AI-focused software testing services company

Jul 22, 2019

NOIDA, Uttar Pradesh, India
A passionate and innovative group of entrepreneurs – Vipul Kocher, Dr. Srinivas Padmanabhuni, Saurabh Bansal, Rakesh Sharma – from industry and academia got together to conceive and create tai. Very briefly, tai means ‘testing AI’ and ‘testing with AI’.

On June 12, 2019, the founders announced the global launch of tai, which is the world’s first AI-focused testing services company. tai operates from India (NOIDA, Bangalore, and Jodhpur), Germany, and Canada.

On the occasion, Vipul Kocher, a parallel entrepreneur and tai’s Soul/Atma (आत्मा), said, “The trigger for starting tai was two-fold. One was the lack of focus on testing AI products, while the other was the unfulfilled desire of most test professionals to use AI in traditional testing.” He added that going forward, “AI will be a part of most products and unless we develop tools, techniques, and processes for testing AI, we won’t be contributing to safe AI usage.”

Dr. Srinivas Padmanabhuni, tai’s Intellect/Buddhi (बुद्धि), while expounding the academic approach, said, “tai combines the best-of-breed techniques from AI research, including metamorphic testing, generative models for testing, and adversarial testing.”

AI (Artificial Intelligence) is a disruptive technology affecting various industries such as healthcare, agriculture, manufacturing, call centers, energy & mining, and Intellectual Property (IP). AI usage is also a disruptive strategy in testing services.

AI applications have already become a part of our everyday life and our dependence on them is increasing exponentially. Traditionally, a lot of resources are used in testing an application or product before its release. However, there is a distinct lack of focus on testing AI components in applications.

tai addresses this lacuna. Its investments in researching AI testing have created tai’s core competence – skilled AI test professionals, AI-based tools and IP. In essence, tai is an end-to-end one-stop destination for all testing-related solutions for AI applications, products, and components.

tai serves the needs of AI application and system developers by:

  • Using new AI research, its AI-based IP and tools to test AI systems, applications, and components.
  • Using AI to manage the entire Test Life Cycle. For example, using AI to devise an end-to-end test strategy, optimize conventional testing, and automate tests.
  • Using AI to prepare and pre-process data for AI applications.

Very specifically, tai’s bouquet of service offerings includes:

  • Consulting Services
  • Testing AI Apps
  • AI in Testing & Automation
  • Data Wrangling Services

tai’s unique value proposition allows customers to build and release quality AI applications with an improved time to market and reduced cost of quality.

tai is already working with a few customers; the list includes MoMAGIC Technologies, MyGate, KritiKal Solutions, and Mobilous.

Here is what two of our customers say:

Shreyans Daga, Co-founder & Director of MyGate says, “Thanks for your help in testing our mobile application. tai has been a great partner in helping us achieve our quality goals.”

Dipinder Sekhon, Co-founder & CEO, KritiKal Solutions says, “tai transformed the way we looked at testing. In a short time, they have become more than a vendor; they have become our partner in delivering quality in our AI and non-AI products.”

For more information, please visit: www.testAIng.com.

Interview with Rik Marselis

Testing AI with Quality

An interview with Rik Marselis, Digital Assurance & Testing Expert at Sogeti Labs, Trainer for TMap, ISTQB & TPI.

How do you see the role of Quality Characteristics (QCs) in AI systems? And, what are the Quality Characteristics specific to AI systems that do not overlap with those of traditional systems?

For every IT system, quality is determined by many different characteristics. Often, we distinguish between “functional” and “non-functional”. The non-functionals are many, such as performance, usability, and security. All quality characteristics are just as relevant for systems that include AI as for other systems. But when I researched testing of AI systems, I found that the well-known characteristics, for example those of the ISO 25010 standard, did not cover all relevant aspects. So, we added “Intelligent behavior” (which covers topics like the ability to learn and transparency of choices), “Morality” (to view the ethical side of AI implementations), and “Personality” (which, amongst other aspects, looks at mood and humor).
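To make this concrete, here is a minimal sketch of how a tester might keep these AI-specific characteristics and their sub-characteristics alongside a few familiar ones as a simple review checklist. The structure and names are illustrative assumptions only; this is not part of TMap or the ISO 25010 standard.

```python
# Illustrative checklist of AI-specific quality characteristics (from the
# interview above) next to a few well-known ISO 25010 examples.
AI_QUALITY_CHARACTERISTICS = {
    "Intelligent behavior": ["ability to learn", "transparency of choices"],
    "Morality": ["ethics of decisions"],
    "Personality": ["mood", "humor"],
}

ISO_25010_EXAMPLES = ["functional suitability", "performance", "usability", "security"]

def review_checklist() -> list[str]:
    """Flatten both groups into one list of aspects a tester could assess."""
    items = list(ISO_25010_EXAMPLES)
    for characteristic, sub_characteristics in AI_QUALITY_CHARACTERISTICS.items():
        for sub in sub_characteristics:
            items.append(f"{characteristic}: {sub}")
    return items

if __name__ == "__main__":
    for item in review_checklist():
        print("-", item)
```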

As an AI tester, how can one ensure that the above-mentioned QCs are achieved while testing an AI application?

Of course, there are very many approaches and techniques to test for quality characteristics. I would like to go into detail on one of them, a very important one: the sub-characteristic “transparency of choices”, part of intelligent behavior. I’m convinced that in the near future, in many situations, customers will ask their bank or insurance company to explain why it made a specific decision (for example, if an insurance company doesn’t pay a claim). In that case, the explanation “Because our AI decided so” won’t do. So, companies will need “explainable AI”. This field is currently being explored rapidly, for example by our own team, and the term “XAI” (eXplainable AI) pops up more and more often. Very briefly, XAI can be applied after the AI has made the decision, by tracing back, for example, based on a log; or it can be done up-front by adding functionality to the AI that makes it show extra information to help understand why decisions have been made. In the example of the insurance claim, the AI may give the information that an original receipt of purchase of the article was not included and that the claim was therefore not paid, in which case the customer knows how to resolve it.
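As a purely illustrative sketch of the “up-front” flavour of XAI described here (not a real claims system or any specific XAI library; the rules and field names are assumptions), a decision function can record which checks drove its outcome so the reason can be reported back to the customer:

```python
# Hypothetical claim assessment that keeps an explanation trace alongside
# the decision, so "why was my claim rejected?" has a concrete answer.
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    reasons: list = field(default_factory=list)  # explanation trace

def assess_claim(claim: dict) -> Decision:
    decision = Decision(approved=True)
    if not claim.get("original_receipt_included", False):
        decision.approved = False
        decision.reasons.append("original receipt of purchase was not included")
    if claim.get("amount", 0) > claim.get("insured_value", 0):
        decision.approved = False
        decision.reasons.append("claimed amount exceeds the insured value")
    if decision.approved:
        decision.reasons.append("all checks passed")
    return decision

if __name__ == "__main__":
    result = assess_claim({"original_receipt_included": False,
                           "amount": 120, "insured_value": 500})
    print("approved:", result.approved)
    for reason in result.reasons:
        print("because:", reason)
```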

What is Cognitive QA and how does that help in testing?

Cognitive QA is a service of Sogeti that uses AI to support various testing tasks. One example is creating a real-time dashboard: the AI gathers data from various test management tools to compile a concise overview of the current status of quality, risks, and progress. Another example is evaluating a huge test set and deciding which test cases are relevant for regression testing and which can be skipped.
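The regression-selection idea can be sketched with a very simple heuristic. This is not Sogeti’s Cognitive QA itself; the data model, scoring rule, and threshold below are illustrative assumptions: score each test case by whether it covers changed code and by its recent failure history, then run only the highest-value cases.

```python
# Toy test-selection heuristic: prefer tests that touch changed modules
# and tests that have failed recently.
def score(test: dict, changed_modules: set) -> float:
    covers_change = bool(set(test["covered_modules"]) & changed_modules)
    failure_rate = test["recent_failures"] / max(test["recent_runs"], 1)
    return (2.0 if covers_change else 0.0) + failure_rate

def select_tests(tests: list, changed_modules: set, threshold: float = 1.0) -> list:
    """Return the names of test cases worth running in this cycle."""
    return [t["name"] for t in tests if score(t, changed_modules) >= threshold]

if __name__ == "__main__":
    tests = [
        {"name": "test_login", "covered_modules": ["auth"],
         "recent_failures": 1, "recent_runs": 10},
        {"name": "test_report", "covered_modules": ["reporting"],
         "recent_failures": 0, "recent_runs": 10},
    ]
    print(select_tests(tests, changed_modules={"auth"}))  # ['test_login']
```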

Since AI is also used in the testing lifecycle, are there any risks that can come with the usage of AI in testing?

Of course, AI brings risks just like any other tool that is used to support testing tasks. Currently, a major risk is overly high expectations. People misunderstand the meaning of the words artificial intelligence and think that it will be magic. But, in general, it is not much more than interpreting huge amounts of data and drawing conclusions from them. If the wrong training data is used or the wrong goals are set, then AI will not fulfil the expectations. So, like with any testing tool, it starts with defining the objectives and then carefully finding a tool that is capable of reaching those objectives.

Why is it that the release cycle in ‘Digital Testing’ is shorter than that in ‘Continuous Testing’? And how do we ensure QA with such shorter release cycles?

Thanks for this question. You have obviously read my book “Testing in the digital age: AI makes the difference”. In our book, we state that continuous testing takes less than days and digital testing takes less than minutes. Our experience is that when people use continuous testing in their DevOps pipeline, they often include a traditional regression test that slows down the deployment process because it still takes hours. In digital testing, there are two developments that make it possible to significantly shorten the duration of the test.

First, we use AI to make testing more efficient, for example by deciding per run which test cases may be skipped (if a low-risk area was not changed, only a small subset of the test cases is run). Second, we really believe in Quality Forecasting. This means that we use AI to predict the evolution of the quality level of a system. This can be done by using data from previous test cycles together with data from monitoring the live operation of a system. If the AI forecasts a decrease in quality, the team can already take measures before any customer notices a problem.
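A minimal sketch of the Quality Forecasting idea, assuming the quality level per test cycle is summarised as a pass rate: fit a simple linear trend to past cycles and extrapolate one cycle ahead. A real forecast would combine richer data (monitoring, defect counts) and better models; the numbers and alert threshold below are illustrative.

```python
# Toy quality forecast: least-squares trend over past pass rates,
# extrapolated to the next test cycle.
def linear_forecast(values: list[float]) -> float:
    """Fit a line through (0, v0), (1, v1), ... and evaluate it at the next index."""
    n = len(values)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * n + intercept

if __name__ == "__main__":
    pass_rates = [0.97, 0.96, 0.94, 0.91]   # pass rate per previous test cycle
    forecast = linear_forecast(pass_rates)  # predicted pass rate for the next cycle
    if forecast < 0.90:                     # illustrative alert threshold
        print(f"Forecast pass rate {forecast:.2f}: act before customers notice")
```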

What approaches and/or testing methodologies are used when the outcomes/oracles are not known while testing AI applications?

Indeed, a big problem in testing AI systems, specifically continuously learning AI, is that a correct answer today may differ from a correct answer tomorrow. We have described several approaches; I’ll explain two. Tolerance is the first. This means we define boundaries within which an answer should fall. Input is another. Traditionally, testers focus on the output of systems. But since machine learning algorithms change their behaviour based on the input, testers should also have a look at the input. Of course, testers can’t sit next to the system all day and watch the input. However, testers can contribute to creating input filters that ensure that the AI only gets relevant and good input.
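Both approaches can be illustrated with a short sketch, assuming a numeric prediction and a simple record format (all names and boundaries here are illustrative): a tolerance oracle passes an answer that falls inside an allowed band instead of demanding one fixed expected value, and an input filter keeps implausible or irrelevant records away from the learning system.

```python
def within_tolerance(predicted: float, expected: float, tolerance: float) -> bool:
    """Tolerance oracle: pass if the answer lies inside the allowed band."""
    return abs(predicted - expected) <= tolerance

def input_filter(record: dict) -> bool:
    """Only let relevant, plausible input through to the learning system."""
    return 0 < record.get("age", -1) < 120 and record.get("amount", -1) >= 0

if __name__ == "__main__":
    assert within_tolerance(predicted=103.0, expected=100.0, tolerance=5.0)
    assert not input_filter({"age": 230, "amount": 50})  # implausible record is kept out
    print("tolerance and input-filter checks passed")
```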

Where do you see ‘future testing’ heading?

Testers today already need to be able to use test tools. The more AI tools become available, the more testers will also need to be capable of testing using these tools. The efficiency and effectiveness of testing will be further improved. But some manual exploratory testing will always remain needed. Also, testers need to have a general understanding of machine learning and its pitfalls. Just like for other technologies, testers can only come to a solid assessment of the quality of a system if they understand what kinds of quality aspects are relevant. Therefore, I think the quality characteristics, both existing and specifically added for AI, as discussed earlier in this interview, are crucial for testers to make a well-founded judgement of the quality of AI-powered systems.

What resources do you refer to for testing in general and for testing AI?

For me, both ISTQB (www.istqb.org) and TMap (www.tmap.net) are a good starting point. For both, many books and other materials have been published. Currently, I’m working on a new book in the TMap series, and based on that we’ll also do a complete overhaul of the tmap.net website early next year. Further, I like to visit testing conferences because that’s where you meet people who are working on the latest innovations in the testing profession, and they are eager to share their visions and experiences.

What was your experience being engaged in the syllabus writing for AiU?

To me, it was a pleasure to contribute to the Ai United syllabus. People that know me have seen that I’m also willing to help other testers improve their knowledge and skills. So, when this opportunity to spread knowledge on AI testing came by, I was very glad to take it. And I really like the result. I’m just about to get the results of the pilot-courses, and I’m curious where the syllabus can be further improved before it’s quickly brought live.

What do you like doing in your free time?

Many people have seen me walking around at testing conferences carrying my Canon SLR with a huge Tamron 16-300 zoom lens. So yes, that gives away that, besides testing, my other hobby is photography. And it’s not only at conferences, of course. During vacations, my wife (who is also a keen photographer) and I take many pictures, and my wife always creates very nice books as a memory of our great road trips.

Interview with Ai-United

What is Ai-United?

AiU or Artificial Intelligence United (www.ai-united.org) is a group of international experts who are working to create certification standards in the area of Artificial Intelligence.

What kinds of training and certification do you mainly focus on?

The first AiU training is Certified Tester in AI (CTAI), which focuses mainly on the inherent challenges and the evolving role of a tester in AI projects. There will be further courses, so please stay tuned for the growing roadmap.

Tell us something more about the certification: prerequisites, outcomes, and the industry SIG involved in it.

In general, there are no mandatory requirements for AiU-CTAI; however, in order to get the most out of the training, we recommend some experience in software testing and/or development, and highly recommend completing the ISTQB Certified Tester Foundation Level certification before joining this course. Basic knowledge of any programming language – Java, Python, or C++ – as well as a general understanding of statistics will also benefit you throughout the course.

Apart from being the first, what else do you think is unique about this certification?

AiU – Certified Tester in AI is focused on the role of the tester and has been created by experts from both the software testing and AI fields, who came together to come up with something that fits the demands of the global AI community. It has been reviewed by experts with an interest in AI from various fields, in over 30 countries across 5 continents, who have provided important feedback. This is why we can proudly announce that this is the first global certification scheme of its kind, supporting the quality of testing in AI projects.

What is the vision and mission of this organization?

Artificial intelligence (AI) was founded as an academic discipline in 1956. It is only in recent years that AI and its constituent technology of Machine Learning (ML) have become commonplace in business and are, in turn, becoming integral parts of many IT projects. There are endless possibilities and uses for AI and ML which can bring incredible benefits; however, as with many new technologies, it is important that we understand them well so we can better consider potential ethical and negative implications. This is why, at AiU, we believe that the most important thing is knowledge. The mission of the organization is to enable comprehensive dissemination and evangelization of AI knowledge.

Who is partnering with you?

We are working with international experts to create content and have a review committee of 43 professionals from many relevant disciplines in nearly 30 countries, listed on the website.

What recognition is given to training providers?

The recognized training providers are listed on the AiU website. There is a lot of interest from the community in the recognition of further training providers; however, we are waiting for the finalization of the syllabus, which will be completed very soon, so everyone can see the official final version before further recognitions are finalized.

Why is getting trained in AI skills important?

AI can be a terrifying topic for many people. It’s easy to see the many positive advantages but, at the same time, be afraid of some of the possible misuse scenarios and negative implications. For these reasons, Artificial Intelligence United finds it important to set quality standards in the AI field as soon as possible. Society needs as many people as possible to develop the critical thinking skills that are required, especially in projects using AI fundamentals, because these incredible advantages can come with potential risks which need to be properly taken into account while building these systems. In the first course, AiU CTAI, which focuses on the role of the tester, the tester learns not only to verify AI systems but also to consider how potential AI risks can be mitigated.

Why do you think standards are required in the area of Artificial Intelligence?

I believe that there is an undeniable need for setting standards in AI, as it is beginning to have implications in just about every area of our lives. There are situations that we can’t even imagine today which will become reality over the coming years and even months. Such proliferation necessitates the development of cost-effective tools and products that interoperate with each other seamlessly while assuring confidence in their functionality. This inevitably points to the need for standards.