Using Newman for setting up mock data


Hello everyone,

I recently used Postman's CLI runner, Newman, in a somewhat unusual setup, so I thought it might be interesting to share it with the community. In a nutshell, I used it to generate mock data inside a system deployed on Docker.

The background is that I have developed a small microservice system with the help of Docker (you can find it on GitHub - the Postman stuff is in the /mock folder). I also wanted to include a small mock data set, but I did not want to insert it by manipulating the database directly, as is usually done.

To achieve this, I generated a data set (/mock/data) with the help of Mockaroo; basically a set of JSON files for the entities. Each JSON file contains an array of flat objects.

I then wrote a small Node.js script (/mock/transform.js) for converting these files into a Postman environment file, which contains:

  • Some base URLs taken from the process environment.
  • The length of each data set (the number of records; e.g. users.length=100).
  • A flattened form of the data records, where each field becomes a separate variable (the name is generated from the data set name, the index within the set, and the field name; e.g. users.1.name=Serban).

Then I built a small Postman collection (/mock/qanda.postman_collection.json), which loops through all the data and saves the generated IDs (the ID returned by one request is sometimes needed in subsequent requests). I used the technique described in the Branching and Looping doc.
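The looping technique boils down to `postman.setNextRequest`. Here is a rough sketch of the test-script logic (the request name, variable names, and response shape are made up; the tiny `pm`/`postman` stubs only exist so the sketch runs outside Postman — inside a collection you use the real sandbox objects):

```javascript
// Minimal stubs standing in for the Postman sandbox:
const env = new Map([
  ["users.length", "3"],
  ["users.index", "0"],
]);
const pm = {
  environment: {
    get: (k) => env.get(k),
    set: (k, v) => env.set(k, String(v)),
  },
};
let next = "stop";
const postman = { setNextRequest: (name) => { next = name; } };

// Test script of a hypothetical "Create user" request: save the returned
// ID, then either loop back to the same request or end the run.
function onCreateUserResponse(responseJson) {
  const index = Number(pm.environment.get("users.index"));

  // Save the generated ID so later requests can reference it.
  pm.environment.set(`users.${index}.id`, responseJson.id);

  if (index + 1 < Number(pm.environment.get("users.length"))) {
    pm.environment.set("users.index", index + 1);
    postman.setNextRequest("Create user"); // run this request again
  } else {
    postman.setNextRequest(null); // null stops the collection run
  }
}

// Simulate three responses arriving:
onCreateUserResponse({ id: "u-1" });
onCreateUserResponse({ id: "u-2" });
onCreateUserResponse({ id: "u-3" });
```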

Lastly, I also created a small Dockerfile (/mock/Dockerfile) for running the collection together with the environment at startup. This allows developers to easily insert the data without caring about wiring the URLs or adjusting any variables (because all the networking happens inside the Docker network, i.e. the host names are stable).
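The Dockerfile can stay very small, since the official postman/newman image uses newman as its entrypoint; a hedged sketch (the image tag and file names are illustrative, not the actual /mock/Dockerfile):

```dockerfile
# Sketch: run the collection against the generated environment at startup.
FROM postman/newman:alpine
COPY qanda.postman_collection.json mock.postman_environment.json ./
CMD ["run", "qanda.postman_collection.json", \
     "--environment", "mock.postman_environment.json"]
```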

Things to think about in the future: it might make sense to replace the Node.js script and the manual Mockaroo data generation with direct calls to the Mockaroo APIs from Postman (this would allow, e.g., a configurable amount of data to be generated, different data on each run, etc.).

I hope this short insight is of value to the community 🙂 I have been using Postman for years and really enjoy it!