A lot of systems nowadays don't have user interfaces at all; they are meant to be used by other systems alone. Sometimes they have a limited GUI for humans,
but are mainly meant for interaction with other systems.
This development has been reinforced by the spread of microservices.
An API is like an opening into a system. It's like a port, or a delivery hatch, for other systems to use when communicating with this system.
Data can be retrieved from or delivered to an API, and it's up to the system to make sure the API works as intended.
A microservice is basically a stand-alone REST service with a number of endpoints.
Testing against an API is generally very straightforward, and tests are relatively easy to maintain since APIs themselves are quite stable.
This section will mainly deal with REST services. The same principles apply to SOAP services, and to some extent to FTP-based integrations.
Happy learning, or happy confirmation that you are already skilled.
In society we tend to build systems to be as autonomous as possible, engaging with humans only when really needed.
This means we have to integrate more and more IT systems with each other.
Over the years different approaches have been used for this due to different technological limitations (or, around the year 2000, the huge proportion of very junior IT staff).
Companies have used direct integrations between systems (issuing SQL statements against remote system databases or invoking stored procedures),
or export files placed in import file areas, or dedicated integration platforms like BizTalk - used as a bridge between systems to enable replacement of systems.
This eventually led up to SOA (Service Oriented Architecture) being adopted as a general approach in many organizations, with SOAP as the engineered way to implement it.
The latest addition to this is the microservice architecture. It's basically the SOA perspective taken to the extreme, and as a counter-movement to the heavily structured SOAP architecture it mainly relies on REST services, which conceptually share a lot of the same ideas but rely on more lightweight standards.
When working with APIs the distinction between communication protocol, data protocol, and client properties becomes apparent. If you feel unsure about these, this is a good opportunity to revisit the Network chapter.
The most common communication protocol by far is currently HTTP(S). It is the most common protocol for SOAP services, and almost without exception part of the REST architecture. Alternatives to HTTP(S) that sometimes pop up include message queues, FTP, and raw TCP sockets.
Regarding data formats, the most common ones are by far XML, JSON, and flat file formats.
REST (Representational State Transfer) typically consists of a web server that receives and sends data using the HTTP protocol. Typical features include stateless requests, resource-oriented URLs, the use of the HTTP verbs (GET, POST, PUT, DELETE) to operate on those resources, and lightweight payload formats such as JSON or XML.
Comparing REST with Web Services, the main difference is that REST is centered around data transfer while Web Services are centered around exposing methods to external systems.
Swagger files have become a popular way of describing REST services - so popular that many test automation tools can import Swagger files to ease testing.
A Swagger file describes the endpoints, how to use them, what options are available, the data formats used, and generally all the information needed to use a REST interface.
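As an illustration, a Swagger/OpenAPI definition for a company-lookup endpoint could look something like the excerpt below (the paths and names are made up for this example, not taken from any real API):

```yaml
openapi: 3.0.0
info:
  title: Company API          # hypothetical service
  version: "1.0"
paths:
  /clients/companies/{customerNumber}:
    get:
      summary: Retrieve a company by customer number
      parameters:
        - name: customerNumber
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The company record, as JSON
```

A test tool that imports such a file knows the endpoint, its parameter, and the expected response without any further configuration.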
It's considered good practice to make each test autonomous whenever possible. This includes creating a new client for the API for each new test.
If this is not done, there is a risk of dependencies on execution order, since some tests might reconfigure or crash the client.
If your tests rely on a specific order of execution, running a subset of the tests introduces risks.
For the same reason it makes sense to have dedicated methods that prepare test data, and to call them to set up the test context.
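A minimal sketch of this pattern, here in Python's unittest for brevity (the client and helper names are made up; a real suite would wrap the actual HTTP client):

```python
import unittest

class FakeApiClient:
    """Stand-in for a real HTTP client; in a real suite this would
    wrap the actual HTTP library. All names here are illustrative."""
    def get(self, path):
        # Pretend the request succeeded.
        return {"status": 200, "path": path}

class CompanyEndpointTests(unittest.TestCase):
    def setUp(self):
        # A fresh client per test: no test can leave a reconfigured
        # or crashed client behind for the next test.
        self.client = FakeApiClient()
        # Test data is prepared by a dedicated helper, not inline.
        self.company = self.prepare_company()

    def prepare_company(self):
        return {"customer_number": "C-1001", "name": "Acme Ltd"}

    def test_get_company(self):
        response = self.client.get(
            "/clients/companies/" + self.company["customer_number"])
        self.assertEqual(response["status"], 200)
```

Because every test builds its own client and its own data in setUp, any subset of the tests can run in any order.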
In unit testing the test execution time can be an annoying factor. Unit tests should typically run fast and cover a lot. Preferably they should even run in parallel.
In system testing the execution time is less relevant, since no one is sitting eagerly awaiting the test results.
Test execution time can still become a bottleneck if the tests take too long, but deal with that when it happens.
It's often a better idea to make the tests run as stably as possible, since they take so long to execute that re-runs are costly and annoying.
```csharp
using TestingFramework.HttpClient;
using MSTest.TestFramework;

[TestClass]
public class DemoTests
{
    [TestMethod]
    public void TestNumberOne()
    {
        var dataFactory = new DataFactory(Users.SuperAdmin);
        var company = dataFactory.GetSuitableCompany(
            StandardCompany.PositiveCashFlow().SolidBoard());
        var client = new HttpClient();
        var response = client.SendGetRequest(
            "https://myapplication.mycompany.com/clients/companies/" + company.CustomerNumber);
        Assert.IsTrue(response.Body()?.ToString().Contains(company.Name) == true);
    }
}
```
Almost all API tests are run as system tests, and hence stable API tests rely on good client management and resilient test data strategies.
A lot of developers refer to API testing as integration testing. This may be derived from the Maven terminology, or from the code-centric world view of developers.
API testing revolves around data, so a short walkthrough of test data approaches is in order.
When testing a business flow through one or more applications, the data flows through a life cycle - an issue/errand is created, it is managed by different instances, and at the end the issue is closed.
This cycle can come in very handy in test automation, since it means we decouple from a lot of dependencies on what data is already in the database (metadata in the DB excluded).
For API testing this often means that we use built-in APIs to create the test data needed for our own tests - by POSTing data to the SUT and running commands to fast-forward the data to the state we want it to have for our specific verification.
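The shape of such a helper can be sketched as below. The endpoint paths, the state names, and the client are all made-up placeholders; a fake client stands in for a real HTTP client so the sketch is self-contained:

```python
class FakeClient:
    """Minimal stand-in that records requests and fakes responses."""
    def __init__(self):
        self.requests = []
        self._next_id = 100

    def post(self, path, payload):
        self.requests.append((path, payload))
        self._next_id += 1
        return {"id": self._next_id, "state": "created"}

# Hypothetical state machine: which actions fast-forward an issue
# from "created" to the state a verification needs.
TRANSITIONS = {
    "created": [],
    "assigned": ["assign"],
    "closed": ["assign", "resolve", "close"],
}

def create_issue_in_state(client, title, target_state):
    """POST a new issue, then drive it through the normal state
    transitions until it reaches the requested state."""
    issue = client.post("/issues", {"title": title})
    for action in TRANSITIONS[target_state]:
        client.post("/issues/{}/{}".format(issue["id"], action), {})
    issue["state"] = target_state
    return issue
```

A test that needs a closed issue calls `create_issue_in_state(client, "Broken login", "closed")` and never touches the database directly.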
The drawback of this approach is that it's quite slow in execution and prone to become unstable if underlying mechanisms change. Hence it requires a very structured API testing code base, based on SOLID principles and other good architectural practices.
Through SQL, data can be injected directly into any SQL database (given you have sufficient user rights). This is a useful method for arranging test data for an API test. However, although this method is fast in execution, it requires quite a lot of knowledge to set up, and the more mature the system under test is, the fewer developers are on the team, since the change rate decreases. This means you may end up with tests depending on a database schema that no one really knows anymore.
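The technique itself is simple, as this sketch shows. An in-memory SQLite database stands in for the real one, and the table and column names are made up for illustration:

```python
import sqlite3

# In-memory database as a stand-in for the system under test's DB.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE companies (
    customer_number TEXT PRIMARY KEY,
    name TEXT,
    cash_flow TEXT)""")

def insert_company(conn, customer_number, name, cash_flow="positive"):
    # Parameterized statements: never build SQL by string concatenation.
    conn.execute(
        "INSERT INTO companies (customer_number, name, cash_flow) "
        "VALUES (?, ?, ?)",
        (customer_number, name, cash_flow))
    conn.commit()

# Arrange test data directly, bypassing the application layer.
insert_company(conn, "C-1001", "Acme Ltd")
row = conn.execute(
    "SELECT name FROM companies WHERE customer_number = ?",
    ("C-1001",)).fetchone()
```

The speed comes from skipping the application entirely - which is also exactly why the tests break silently if the schema changes.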
In REST the data transferred represents a programmatic object. Transferring it back into an object often makes it easier to re-use in subsequent requests. This, however, requires access to the original code for ease of maintenance (white box testing).
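A sketch of the round-trip, turning a JSON response body back into a typed object so it can feed the next request. The field names are illustrative, not taken from any real API:

```python
import json
from dataclasses import dataclass

@dataclass
class Company:
    customer_number: str
    name: str

def parse_company(body: str) -> Company:
    """Deserialize a JSON response body into a Company object."""
    data = json.loads(body)
    return Company(customer_number=data["customerNumber"],
                   name=data["name"])

# A response body as it might arrive over the wire.
response_body = '{"customerNumber": "C-1001", "name": "Acme Ltd"}'
company = parse_company(response_body)
# The object can now feed a subsequent request, e.g. a GET to
# "/clients/companies/" + company.customer_number
```

With access to the original source, the real entity classes can be reused instead of this hand-written mirror, which is what makes the white box variant easier to maintain.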
If XML is used, the entity data can be described by an XSD (XML Schema) document. There are tools to create classes from an XSD in many programming languages,
if reverse engineering is required.
For example there are:
Language | Example of tool
---|---
C#/.NET | xsd.exe
Java | xjc (JAXB)
Javascript | Dynamic language. No need.
Python | Dynamic language. No need.
Running one of these tools generates usable class structures based on the information in the XSD.
Some of these tools can also work directly from a sample XML document.
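In a dynamic language the XML can simply be worked with directly, which is why no class-generation tool is needed there. A Python sketch, with made-up element names:

```python
import xml.etree.ElementTree as ET

# A payload as it might arrive from an XML-based API.
payload = """
<company>
  <customerNumber>C-1001</customerNumber>
  <name>Acme Ltd</name>
</company>
"""

# No generated classes: parse and pick out fields by element name.
root = ET.fromstring(payload)
customer_number = root.findtext("customerNumber")
name = root.findtext("name")
```

The trade-off is that typos in element names only surface at runtime, whereas generated classes catch them at compile time.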
SOAP is a standard that enforces correctness. It has been called over-engineered at times, but when correctly implemented it performs rather well - although with some communication overhead compared with, for example, REST.
Broadly speaking, SOAP adds transport control and rigid namespace mechanisms on top of XML structures.
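The skeleton of a SOAP request envelope looks like this (the envelope namespace is the standard SOAP 1.1 one; the body element and its namespace are made up for illustration):

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <GetCompany xmlns="http://example.com/companies">
      <CustomerNumber>C-1001</CustomerNumber>
    </GetCompany>
  </soap:Body>
</soap:Envelope>
```

The Envelope/Header/Body structure and the namespaces are what the standard enforces; the actual payload sits inside the Body.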
Message queueing is a transport mechanism that uses a FIFO (First In, First Out) approach for messages rather than the parallel request handling of REST or SOAP. Generally speaking it can be thought of as a DB table of messages that are distributed one at a time to anyone subscribing to that particular topic on a channel.
The message queue delivers messages. Most commonly the messages contain XML or JSON, but anything may be sent over MQ.
There is a wide range of different flavours of MQ (ActiveMQ, JMS, MSMQ, among others). They work in similar ways, but are not always compatible with each other.
Test automation with MQ consists of injecting messages and verifying content. Retrieving a message is quite easy: you just subscribe to the relevant channel. Injecting might be trickier. You could place your own messages on the queue, but depending on the security solutions involved it's sometimes easier to place messages in the outgoing tables of the upstream system and trigger whatever job pulls them onto the queue.
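The FIFO behaviour itself can be illustrated with Python's standard queue, standing in for a real broker (message contents are made up):

```python
from queue import Queue

# A channel: messages come off in the order they were put on,
# one at a time, which is the essence of FIFO delivery.
channel = Queue()
channel.put('{"issueId": 1, "event": "created"}')
channel.put('{"issueId": 1, "event": "closed"}')

# A subscriber drains the channel and sees the original order.
received = []
while not channel.empty():
    received.append(channel.get())
```

A real test against a broker follows the same shape: inject on one end, subscribe on the other, and assert on content and order.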
File-based system interaction is still quite common. Automating against these types of interfaces is easy: you just prepare a file and place it in some input folder for the system under test. The tricky part is knowing when all the content of the file has been handled by the system under test.
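A common way to handle the "when is it done?" problem is to poll for some completion signal, such as a done-marker file. A sketch, where the system under test is simulated by writing the marker ourselves and all file names are made up:

```python
import os
import tempfile
import time

inbox = tempfile.mkdtemp()  # stands in for the SUT's input folder

def drop_file(folder, name, content):
    """Place a file in the watched folder."""
    with open(os.path.join(folder, name), "w") as f:
        f.write(content)

def wait_for_marker(folder, marker, timeout=5.0, poll=0.1):
    """Poll until the marker file appears; False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(os.path.join(folder, marker)):
            return True
        time.sleep(poll)
    return False

drop_file(inbox, "orders.csv", "id;amount\n1;100\n")
# Simulate the SUT finishing its import by writing a done-marker.
drop_file(inbox, "orders.csv.done", "")
processed = wait_for_marker(inbox, "orders.csv.done")
```

If the system under test offers no marker, alternatives are polling its database or API for the imported records, or watching the input file disappear.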
For these types of system interactions a programmatic approach is always needed.