WebComponents vs. Angular Formly – Issues in Displaying Forms in Firefox and Safari Browsers

I would like to share something quite interesting with you about Angular Formly. At Vmoksha Technologies, our goal is to write optimized, high-quality code. To attain that goal, we as a team lean towards techniques that are contemporary to today's software development.

One such framework is AngularJS, a quite interesting framework that we have used in our recent web projects. As an extension to it, we went ahead and explored a library that lets you generate HTML forms – Angular-Formly.


Smooth sailing, until you hit rock bottom!

Yes. During the testing phase across multiple browsers and versions, our quality control team found that certain browsers, namely Firefox and Safari, would not display the HTML forms generated by Angular-Formly. We could have just ignored this, put up a disclaimer stating that the project supports only certain browsers and versions, and moved on.

But that would not serve the objective of the project. At Vmoksha, "Failure is not an option" is the mantra, and we wanted to get this resolved. Questions were raised and discussions held to figure out the root cause of the issue. As always, we approached Guru Google as well as developers on multiple forums for a solution. But Lady Luck didn't turn her face towards us for more than a week.


Was the culprit Angular-Formly or the browsers in scope?

With no real hint about the root cause, let alone a solution, we were left with only one option: resolve it ourselves.


Yes, we found it, fixed it, and tested it. Firefox, Safari – OK.

I can hear your mind: tell us the secret, buddy. I know what you are thinking –

What was the root cause? How did you resolve the issue?

Read on, as I explain why we used Angular-Formly, how we identified the root cause, and how we provided the fix.


Why did we use Angular-Formly?

Even an expert HTML developer would agree that repeating the same block of HTML code over and over is frustrating and outdated.

"Angular-Formly is JavaScript-powered forms for AngularJS; it lets you generate HTML forms automatically without much effort."

Angular-Formly does just that: it reduces the effort of writing HTML forms and delivers them the way we want. Customising Formly might seem difficult at first, but once you get it right, you can reuse the configuration for as long as you wish. It takes a few parameters and draws the HTML form for you on screen, as in the sketch below.
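For illustration only, here is a minimal sketch of what a Formly field configuration might look like in an AngularJS 1.x controller, assuming the formlyBootstrap template set; the module name, controller name, and field keys are our own placeholders, not taken from the project.

// Minimal Angular-Formly sketch. 'app', 'DemoController', and the field keys are illustrative.
angular.module('app', ['formly', 'formlyBootstrap'])
  .controller('DemoController', function () {
    var vm = this;
    vm.model = {};              // the object Formly binds the inputs to
    vm.fields = [               // Formly draws one input per field configuration
      {
        key: 'email',
        type: 'input',
        templateOptions: { type: 'email', label: 'Email', required: true }
      },
      {
        key: 'country',
        type: 'select',
        templateOptions: {
          label: 'Country',
          options: [{ name: 'India', value: 'IN' }, { name: 'USA', value: 'US' }]
        }
      }
    ];
  });

In the view, a single <formly-form model="vm.model" fields="vm.fields"></formly-form> element then renders the whole form.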


How did we resolve the browser issue?

The below images depict the scenarios of our implementation of Angular-Formly forms in different browsers:

In Chrome:

Angular formly

In Firefox and Safari:

Angular formly

Our approach to resolving the issue kick-started with the following questions:

  1. Is it the CSS that we are using in the project?
  2. Could it be a problem with the version of Angular-Formly used in the project?
  3. Maybe Angular-Formly doesn't support Firefox and Safari. Did we check it?
  4. An overlap of a JS or CSS is possible. Who knows?

The last question ignited a thought in our minds. As we kept analysing each JS file referenced in the project, we nailed down the root cause of the issue. We found something striking – WebComponents.js – and ran the project excluding that file. To our astonishment, the Angular forms displayed seamlessly in all browsers, including Firefox and Safari. So, we extended our research into the use of the component, its source, and its impact.


Root Cause of the issue

In our project, we have a placeholder to show maps, and for that very reason, we had the google-map Bower component installed along with its list of dependencies.


Subsequently, the Polymer Bower components got installed as dependencies, and one such dependency is "WebComponents.js" (an optional dependency item). The relevant excerpt from the Polymer bower.json looks like this:

  	"type": "git",              
        "url": "https://github.com/Polymer/polymer.git"   
        "web-component-tester": "*",                         
        "iron-component-page": "polymerElements/iron-component-page#^1.1.6"          


About WebComponents.js

WebComponents.js is a set of polyfills built on top of the Web Components specifications. Web components assist you in creating your own custom HTML elements: instead of loading your sites with verbose markup, repetitive code, and long scripts, you wrap everything up into neat little custom HTML elements.


Final Fix

Note: The WebComponents.js polyfill layer is no longer needed for browsers that fully implement the Web Components APIs, such as Chrome 36+.

So, we excluded the WebComponents.js script from the project. Since then, the Angular-Formly forms have been working seamlessly in all modern browsers.
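For illustration, the exclusion itself is just a matter of dropping the polyfill reference from the page; the path below reflects a typical Bower layout and is not necessarily the exact one used in our project.

<!-- index.html: remove (or comment out) the WebComponents.js polyfill reference -->
<!-- <script src="bower_components/webcomponentsjs/webcomponents.js"></script> -->

If a tool such as wiredep injects Bower scripts automatically, the equivalent is to override that package's main files in bower.json so the polyfill is never injected.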

Hope this write-up helped you learn something from our experience and resolve the issue yourself.

Thanks for reading our blog. Watch this space as we continue our journey of building robust, quality applications.

Virtual Hosting Using Nginx Server

Nginx is a web server that can also act as a reverse proxy, load balancer, IMAP/POP3 mail proxy, and HTTP cache. It is well known for its stability, high performance, simple configuration, rich feature set, and low resource consumption. So, we can deploy web applications, such as HTML pages and PHP-based applications, directly on this server.

Let's see how to configure Nginx as a reverse proxy for virtual hosting

#1. Install Nginx on any server (I am using an Ubuntu system).

#2. Choose any domain/sub-domain name and create a CNAME record pointing that name to the Nginx server (the Ubuntu system, port 80).

Note: Port 80 is the default port for Nginx. If you change the port, you need to adjust the configuration and access the application on that port accordingly.

#3. Once the CNAME record and Nginx are ready, create a conf.d folder inside the Nginx configuration directory (/etc/nginx) if it does not already exist.

#4. Create a configuration file named after the domain/sub-domain, with a .conf extension.

For example, if you want the application to work on 'abc.mycompany.com,' create a configuration file named 'abc.mycompany.com.conf,' copy the code given below into it, and save the file.

   server {
      listen 80;
      server_name abc.mycompany.com;

      location / {
         # Placeholder upstream: point this at your actual running application
         # (see the proxy_pass note below).
         proxy_pass http://localhost:8080;

         proxy_http_version 1.1;
         proxy_read_timeout 300000;
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection 'upgrade';
         proxy_set_header Host $host;
         proxy_cache_bypass $http_upgrade;
      }
   }


#5. Restart/reload the Nginx.
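On a typical Ubuntu installation, this boils down to something like the following: test the configuration first, then reload without downtime.

sudo nginx -t
sudo service nginx reload    # or: sudo systemctl reload nginx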

Now your application will work with the domain name based on your configuration.



The key directives used above are:

listen – The port Nginx listens on

server_name – The domain name served by this virtual host

proxy_pass – The URL of the actual running application (the domain name indirectly calls this URL)

proxy_read_timeout – For long-running connections/responses (optional)

Nginx default proxy read timeout – 60 seconds

Setting up a Secure Email Engine using Amazon SES

Cloud computing, also known as on-demand computing, is a kind of Internet-based computing that provides shared processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., networks, storage, applications, servers, and services) that can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide enterprises and users with various capabilities to store and process their data in third-party data centers. It relies on the sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network.

Amazon Web Services (AWS), a subsidiary of Amazon.com, offers a suite of cloud computing services that make up an on-demand cloud computing platform. The scope of this blog is confined to one of the most efficient and effective services that is part of AWS – Amazon SES.

Amazon SES is a pay-per-use email distribution engine that provides AWS users with an easy, authentic, cost-effective, reliable and consistent infrastructure for sending and receiving bulk email correspondence using your domain and email addresses. 

Amazon SES

Why does Vmoksha opt for Amazon SES?

Amazon SES works with Elastic Compute Cloud (EC2), Lambda, Elastic Beanstalk, and various other services. It is available in different regions such as US East, US West, and EU (Ireland), which allows consumers close to these regions to deploy their applications to ensure high availability and low latency.

Unlike other SMTP players in the market, Amazon SES provides competitive pricing and deliverability.

Listed below are certain benefits of using Amazon SES:

  1. Trusted by Internet Service Providers (ISP) as an authentic source
  2. Cost-Effective & Competitive Pay-per-use pricing
  3. Reliability and Scalability
  4. Bulk Messaging Engine
  5. Automation using Amazon Lambda functions
  6. Ensured deliverability and active monitoring to make sure that illegal or questionable content is not being distributed
  7. No Infrastructure challenges
  8. Provides mailbox simulator application as a testing environment
  9. Real-time notifications via Amazon SNS.

How does Vmoksha make use of Amazon SES?

The Amazon SES service, along with AWS Lambda, is configured to send emails automatically. Mail sent via SES is verified by ISPs and mail service providers such as Google and is finally delivered to the employee(s). To ensure smooth delivery of the mail, Vmoksha performs certain setup steps, which are described in the following sections.

The following diagram explains the scenario

Amazon SES
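To give a feel for the sending side of the diagram, here is a minimal sketch of a Lambda-style Node.js handler that sends a message through SES using the AWS SDK; the region, addresses, and message content are placeholders, not our production values.

// Minimal sketch: sending mail through Amazon SES from a Node.js Lambda handler.
// Region, addresses, subject, and body are illustrative placeholders.
var AWS = require('aws-sdk');
var ses = new AWS.SES({ region: 'us-east-1' });

exports.handler = function (event, context, callback) {
  var params = {
    Source: 'noreply@abc.com',                        // must be a verified address/domain
    Destination: { ToAddresses: ['employee@abc.com'] },
    Message: {
      Subject: { Data: 'Hello from SES' },
      Body: { Text: { Data: 'This mail was sent automatically via Amazon SES.' } }
    }
  };

  ses.sendEmail(params, function (err, data) {
    if (err) { return callback(err); }
    callback(null, data.MessageId);                   // SES returns a message ID on success
  });
};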

Setting up Amazon Simple Email Service (SES):

First, set up an Amazon Web Services (AWS) account to use this service.

After signing up for the AWS account, log in to the management console and look for SES under the Services section, or go directly to the URL http://aws.amazon.com/ses


Steps to verify Email Addresses and Domain:

   I.  Steps to Configure Amazon SES

Go to the SES home page, navigate to the Identity Management menu, and choose whether to verify your email domain or a list of email addresses.

For example;

Email addresses – sales@abc.com, finance@abc.com and so on…

Domain – abc.com

The verification is managed using the Amazon SES console or Amazon SES API.

Note: Email address and domain verification status for each AWS region is separate.

While email address verification is quite easy (it is completed by opening the verification URL that SES sends), domain verification demands the following steps:

    1. Go to Domains under Identity Management and select Verify a New Domain.
    2. Enter the domain name, select Generate DKIM Settings, and click Verify This Domain.
    3. A list of DNS record details will be displayed, which need to be added to the DNS zone file of your domain (e.g., via GoDaddy DNS management).
    4. Download the CSV file of DNS records. This contains the details of the Text (TXT), Canonical Name (CNAME), and Mail Exchange (MX) records that need to be added or amended in your DNS records.
    5. Domain verification can be done by just adding a text (TXT) record to your DNS zone file, but it is highly recommended to perform DKIM verification as well.
    6. The TXT record looks similar to this:


_amazonses.abc.com         TXT     pmBGN/7MjnfhTKUZ06Enqq1PeGUaOkw8lGhcfwefcHU=


  7. Once the TXT record propagates in the domain, the domain verification status changes to 'verified.'
  8. To ensure that the mail is from a trusted source, DKIM verification is required. DKIM verification can be done by adding CNAME records in the DNS control panel.
  9. Once the DNS changes are reflected, the domain is fully verified.

Email Authentication via SPF or DKIM:

Amazon SES uses the Simple Mail Transfer Protocol (SMTP) to send email. Since SMTP does not provide authentication by itself, spammers can send messages pretending to be from the actual sender or domain. Most ISPs evaluate email traffic to check whether the emails are legitimate.


Authentication Mechanisms:

There are two authentication mechanisms used by ISPs commonly:

  1. Email Authentication with SPF (Sender Policy Framework)
  2. Email Authentication with DKIM (DomainKeys Identified Mail)


Email Authentication with SPF:

Setting up SPF Records and Generating SMTP credentials:

A Sender Policy Framework (SPF) record indicates to ISPs that you have authorized Amazon SES to send mail for your domain. An SPF record looks similar to this:

abc.com       SPF           “v=spf1 include:amazonses.com -all”


SMTP credentials can be generated from the SES management console under the Email Sending section. It prompts you to create an IAM user and provides an SMTP username and password once that IAM user is created. An alternative is to create a separate IAM user with access to the SES service and derive the SMTP credentials from its access key and secret key.
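As an illustration (not our exact production setup), these SMTP credentials can be plugged into any standard SMTP client. For example, with the nodemailer package in Node.js, where the host shown is the SES SMTP endpoint for the us-east-1 region and the credentials and addresses are placeholders:

// Minimal sketch: sending through the SES SMTP interface with nodemailer.
// The username/password stand for the credentials generated in the SES console.
var nodemailer = require('nodemailer');

var transporter = nodemailer.createTransport({
  host: 'email-smtp.us-east-1.amazonaws.com',  // SES SMTP endpoint for us-east-1
  port: 587,                                   // STARTTLS port
  secure: false,
  auth: { user: 'SES_SMTP_USERNAME', pass: 'SES_SMTP_PASSWORD' }
});

transporter.sendMail({
  from: 'noreply@abc.com',                     // must be a verified address/domain
  to: 'employee@abc.com',
  subject: 'Hello from SES (SMTP)',
  text: 'This mail was sent via the Amazon SES SMTP interface.'
}, function (err, info) {
  console.log(err || info.response);
});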


If an SPF record already exists, you can append "include:amazonses.com" to the existing record. Also, to work with Google Apps, you need to add "include:_spf.google.com ~all."

If an SPF record does not exist in the DNS zone file, a TXT record can be added with the value "v=spf1 include:amazonses.com -all."


Email Authentication with DKIM:

DKIM (DomainKeys Identified Mail) is a standard that allows senders to sign their email messages and allows ISPs to use those signatures to verify that the messages are legitimate and were not modified by a third party in transit. DKIM setup can be done by adding the CNAME records provided by Amazon SES to the DNS zone file.

Here are sample CNAME records for DKIM verification:

mvkw7orpsecw2._domainkey.abc.com  CNAME  mvkw7orpsecw2.dkim.amazonses.com
jp5x3nni3zf4uo6._domainkey.abc.com CNAME  jp5x3nni3zf4uo6.dkim.amazonses.com
7i3j33udxinbhjf6._domainkey.abc.com  CNAME 7i3j33udxinbhjf6.dkim.amazonses.com


Finally, it's time to leave self-managed SMTP servers behind and move to AWS Simple Email Service (SES). In this way, Amazon Web Services reduces DevOps effort and takes the IT revolution to the next level.



The defect life cycle, a.k.a. the bug life cycle, is the journey of a defect from its identification to its closure. A defect undergoes different states during its lifetime. But before going deep into the defect life cycle phases, it is important to know a few fundamentals.

Error – Defect – Failure

Finding flaws in software has never been easy. Rather, it has always been challenging for the entire team working on its successful completion. The words Error, Defect & Failure seem to be the same, but their meaning varies depending on the context. An Error leads to a Defect, which further leads to a Failure. It is a chained process which has to be rectified early to avoid business impact.

Defect Life Cycle


The term 'Error' means a human mistake or misconception related to design, or a deviation from the actual business requirement. If the authorized person gathers the client requirement erroneously, it is referred to as an Error.


An error in the coding or logic is referred to as a Defect/Fault/Bug. If the development team codes the mistakenly gathered requirement, it results in a fault.


A Failure means any deviation from the desired result. A fault made in coding leads to unexpected results that differ from the end user's expectation. In that case, we say the project landed in 'Failure.'

Defect Life Cycle

The defect life cycle has many stages, from Open/New until Closed or Re-opened, and it varies from project to project.

Defect Life Cycle

It looks arduous, but if you follow these significant steps, finding and eliminating a bug/defect is quick and easy. The whole process is explained with different scenarios below:


Scenario One

             NEW  →  ASSIGNED


The moment a test engineer finds a bug, he should raise the defect with the status 'Open/New.' The development team will validate the defect and assign it to a developer, changing the status to 'Assigned.' The developer will fix the issue and change the status to 'Fixed.' The test engineer then retests, and if the issue is resolved, he will change the status to 'Closed'; otherwise, to 'Re-open.'

Scenario Two

             NEW  →  DUPLICATE


Sometimes the defect status is marked as 'Duplicate.' A duplicate defect means the same issue has been raised by both person A and person B.

NOTE – There is a counterpart to this scenario: if a defect was raised and closed in the past and the same defect arises again in the future, it is treated as a 'New' defect.

Scenario Three

             NEW  →  INVALID


Invalid and Rejected are nearly synonymous. If the development team finds that a defect raised by the test engineer is invalid, the developer will change the status to 'Rejected.'

Scenario Four

             NEW  →  PCR/RFE


A Product Change Request (PCR) or Request For Enhancement (RFE) is raised when the need is for an enhancement rather than a defect fix. For example, consider the Gmail application. It has multiple features, one of which is deleting multiple emails at a time. If this feature were missing, the test engineer should raise it as a request for enhancement, not as a defect.

Scenario Five

             NEW  →  POSTPONED


If it is decided that a defect will be fixed in the next release, it is marked as 'Postponed/Deferred.' Reasons for postponing a bug include low priority, lack of time, or the bug not having a major impact on the software.

Scenario Six

             NEW  →  CANNOT BE FIXED


This situation usually arises for technology-related reasons. Every language (Java, C, C++, .NET, etc.) has its own limitations, and due to these limitations, such a scenario may arise. Another reason may be that the cost of fixing the defect is higher than that of living with it.


Defect tracking and management are important aspects of testing and development. If dealt with properly and in time, they save a lot of effort and also increase productivity.

Installation, Environment Setup, and Adding Proxy to npm and Node.js Packages

Node.js is a server-side platform built on Google Chrome's JavaScript runtime that helps you build scalable network applications quickly and efficiently. As Node.js uses an event-driven, non-blocking I/O model, it is perfect for data-intensive real-time applications that run across distributed devices.


Image Reference: www.tutorialspoint.com

Node.js Installation & Environmental Variable Setup 

  1. Download the latest Node.js Windows installer from here
  2. Run the file by following prompts in the installer and complete the installation
  3. Make sure the installation files are copied under C:\Program Files\nodejs (or) C:\Program Files (x86)\nodejs
  4. Now open the system properties (enter sysdm.cpl in the command prompt) and click the Advanced tab.
  5. Click Environment Variables. A pop-up window will open displaying Path under System Variables. Check whether the path contains C:\Program Files\nodejs (or) C:\Program Files (x86)\nodejs. If not, append the path manually by clicking Edit

node.js installation

Verify Installation

  • Open a text editor in any file location and save it as “main.js” file. The file should have the following content.
/* Hello, World! program in node.js */
console.log("Hello, World!")
  • Now open a command prompt, navigate to the folder that contains main.js, and execute the file using the node main.js command.
  • If you have done the process correctly, the final result will be displayed as
Hello, World!
  •  You can also check the node version using the command node -v


Installing Packages using ‘node package manager’ (npm)

There is a simple syntax to install any Node.js module:

$ npm install <Package Name>

For example, the following command installs Gulp, a popular Node.js build tool / task runner:

$ npm install gulp

Installing the npm packages globally

To download packages globally, simply use the command npm install -g <package name>

Note: Some npm package dependencies are downloaded from Git repositories, so Git (Git Bash on Windows) is required. You can download and install Git Bash from here.


Setup Node.Js and npm Behind a Corporate Web Proxy Server

  • Systems that sit behind a corporate proxy require a separate npm proxy setup.
  • npm applies the proxy settings from its configuration file (.npmrc), which can be modified through the command prompt.
npm config set proxy http://proxy.company.com:8080
npm config set https-proxy http://proxy.company.com:8080

  • Run these commands in the command prompt one by one; you can verify the settings as shown below.
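To confirm that the settings were applied (or to remove them later), npm can read back and delete its own configuration:

npm config get proxy
npm config get https-proxy
npm config delete proxy
npm config delete https-proxy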


Running a sample project using npm

  • Let's assume that your project is located on the F: drive
  • Open a command prompt and navigate to F:\Sample project (the project folder)
  • Now verify the version of Node using the node -v command. An outdated npm can be updated using the command npm install -g npm
  • After updating npm, install Bower and the project's dependency files (Bower can be installed globally using npm install -g bower, after which bower install pulls in the project's Bower dependencies)
  • After the Bower installation, you can run the project using gulp serve-dev for development mode and gulp serve-build for production mode (the full command sequence is summarised after this list)
  • After running the command, the web browser will open automatically at http://localhost:3000
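Putting the steps above together, the typical command sequence looks something like this; it assumes the project ships a package.json, a bower.json, and a gulpfile that defines the serve-dev and serve-build tasks mentioned above.

:: go to the project folder
cd /d "F:\Sample project"

:: update npm itself if it is outdated, then install the global tooling
npm install -g npm
npm install -g bower gulp

:: install the project's npm and Bower dependencies
npm install
bower install

:: development mode (use gulp serve-build for a production build)
gulp serve-dev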


Deploy Applications to AWS Cloud in a Jiffy!

Amazon Web Services (AWS) is a favored cloud provider for .NET developers because of its flexibility, scalability, and reliability. It is a cost-effective computing resource solution designed to help application developers host their applications quickly and securely. AWS helps businesses reduce capital expenses and administrative costs while retaining the security, performance, and reliability your business demands.

Here, I have provided a detailed step by step procedure for deploying your applications to AWS cloud. Before moving to the procedure, check out the basic prerequisites.


  • Add the AWS Toolkit to Visual Studio. (Here I have used the Visual Studio Community edition; this is the most convenient approach for first-time deployments.)
  • To add the toolkit, click here and download the AWS Toolkit for Visual Studio.

Steps to Deploy

#1. Open your application using Visual Studio. Click Build from the menu and select Publish to AWS.

AWS Deployment

#2. Once you click Publish to AWS, a popup screen will open as shown below.

AWS Deployment

#3. Select the account profile if it has already been created, or create a new profile. To create a new profile, click the Profile icon. A popup screen will be displayed as shown below.

AWS Deployment

#4. You can get the account ID and the Key information from AWS management console. Click Account Identifiers to get account ID and Access Keys to get the Key information. Fill in all the fields for profile creation and click OK.

AWS Deployment

#5. If you are deploying for the first time, choose Create a new environment under the deployment target. Also, choose your required Region for deployment. Click Next to proceed.

AWS Deployment

#6. Select the environment tag and enter the details to create a new environment. Check the availability of your URL, and if it already exists, change the URL to a different name. Click Next to proceed.

AWS Deployment

#7. Configure the environment as per the requirement and click Next.

AWS Deployment

#8. Select service roles granting permissions to your application and click Next.

AWS Deployment

#9. Specify the build configuration, app pool runtime, and app path.

AWS Deployment

AWS Deployment

#10. Review the environment configuration and click Deploy to start the deployment.

AWS Deployment

#11. The deployment event details will be shown in your Visual Studio window as shown below.

AWS Deployment

Now click on the URL shown after the successful deployment.


Migrating applications to the AWS cloud is simple and fast, as shown above. Whether it is an existing application or a new SaaS-based application, AWS eases the migration process and helps dramatically increase both the effectiveness and efficiency of your business processes.


From Fiction to Reality – The Evolution of Beacon Technology

Technology is advancing at a rapid pace, gradually turning science fiction into reality. Beacon technology is one such advancement, once a figment of the imagination of many researchers. Let's have a look at how beacon technology is evolving and changing our business world.

What is Beacon?

A beacon is a tiny, battery-powered, wireless, low-cost device with a built-in Bluetooth chip that works on Bluetooth Low Energy (BLE). It allows Bluetooth-enabled devices to receive data over short distances.

Beacon technology

Image Source: http://www.empresariocapital.com/files/6214/3292/6082/beacons1.jpg

A beacon device is designed so that it is easy to fix anywhere and can be used efficiently by everyone. It continuously broadcasts a radio signal, and when a device receives this signal, it reads the beacon's ID and triggers an action in the smartphone app based on the proximity of the beacon. What makes beacon technology different is its ability to "wake up" an app that is not open but has been downloaded onto the smartphone.

Deep Dive into Beacon Technology

Most beacons use BLE / Bluetooth Smart technology, as it offers low energy consumption and low implementation cost. The technology only allows small amounts of data to be transmitted, which is why most beacons transmit only their IDs.

A beacon ID consists of three values:

  • Universally unique identifier (UUID)
  • Major value
  • Minor value

The purpose of transmitting the ID is to distinguish a beacon from all other beacons in a network. The Major and Minor values are integer values assigned to the beacon for greater accuracy in identification. A beacon also carries information about its signal power, which is used to determine the proximity of the source, as in the illustrative example below.
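Purely as an illustration (the field names and values below are invented, not tied to any particular SDK), a detected beacon advertisement can be pictured as a small record like this:

// Illustrative shape of a detected beacon advertisement (hypothetical values).
var detectedBeacon = {
  uuid: '12345678-1234-1234-1234-123456789abc', // identifies the beacon network/owner
  major: 1,                                     // e.g. a particular store
  minor: 22,                                    // e.g. a particular shelf or aisle
  txPower: -59,                                 // calibrated signal power at 1 metre
  rssi: -67                                     // signal strength measured by the phone
};
// Comparing rssi with txPower lets the app estimate proximity
// (immediate / near / far) and trigger the right action.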


iBeacon is a brand name created by Apple Inc.; it was first presented at the Worldwide Developers Conference in 2013 as part of Apple's iOS 7. It is an Apple technology innovation that has been implemented in the location framework of iOS 7 and newer operating systems. As described above, iBeacon uses BLE technology to sense proximity and transmit a UUID, which triggers an action in a compatible app or operating system.

Eddystone – A Game Changer

In response to iBeacon, Google launched its own beacon project, called Eddystone, on July 14, 2015, with a more open and flexible approach. Eddystone is Google's open-source, cross-platform BLE beacon format. While Apple's iBeacon only works with iOS devices, Eddystone works with both Android and iOS devices. Unlike iBeacons, Eddystone beacons can broadcast not only an identifier but also pre-programmed web page URLs, and thus don't require the installation of specific apps. The URL could be a regular web page providing relevant information; for example, a beacon next to a restaurant can broadcast a link to a YouTube clip or its specialty menu. Certainly, Eddystone will bring in new IoT use cases.

Beacons Empowered

With this drastic rise in beacon technology, companies are investing in it to generate greater revenues. Here is a brief look at how the retail industry is utilizing and benefiting from beacon technology.

Smart Retail

The rapidly growing e-commerce industry has resulted in decreased footfall and in-store sales for both small retailers and big brands. Physical stores have understood that they have to mimic e-commerce in the areas of personalized offers and shopping experiences.

Thus, retail is a critical area where beacons are expected to bring huge impact – from proximity marketing to contactless payments to in-store analytics. 85% of the retail industry is expected to leverage beacon technology by the end of 2016. Beacons may seem like hype today, but let's glance at a few of their revolutionary aspects.

Beacons send location-aware alerts, updates on merchandise/products, and promotional notifications to tempt a passerby to enter the store. They can also be used to analyse customers who walk past the store and their visit duration. This analysis helps in making strategic decisions on product display.

Beacon technology

Image Source: http://www.openxcell.com/wp-content/uploads/2014/05/estimote.jpg

Beacons enable in-store navigation and provide real-world analytics like:

  • The areas and items a customer likes to explore
  • Where a customer spent most of her/his time
  • What and when s/he makes a purchase
  • Most in-store rushed locations
  • In-store deserted locations
  • Busiest days of the store
  • Number of people who walk into their store per day

This data provides insight into customer behavior and store performance. The analysis helps retailers organize their products and prices, and place products in strategic locations on strategic days and at strategic times. By knowing the repeat visitors to the store, retailers can reward those customers with loyalty benefits for their purchases.

Beacons make a customer's in-store journey personalized and unique. They can fetch data from a customer's wish list and notify him when he comes across that particular product, and can also recommend products based on price, quality, and offers to provide a better in-store experience.

Customers who have already set up their payment information on their smartphone can use a connected beacon and complete their purchase by processing the payment (a.k.a. contactless payment) without waiting in long queues. When a payment is made, the stock is automatically updated.

Beacon technology is gradually spreading beyond the retail space; it is being adopted by various other sectors such as hotels, the airline industry, football leagues, the B2B arena, and more. Beacons help businesses attract more customers and understand the demands of their potential customers. It is a cost-effective and targeted marketing technique that promotes sales and generates higher revenues.

Our team of seasoned app developers would love to take on your app development needs and deliver a beacon-friendly app.


How to generate random, realistic & reliable data for your application?

One of the most crucial parts of the application development process is generating large amounts of test data that resemble the production environment. In production, things can get messy when many users hit the app and fill the database with data. Overcoming the issues with random data generation is therefore challenging and needs extensive knowledge. However, tools like Mockaroo help solve the data generation problem efficiently.

Find out briefly what the challenges in the development process are and how to combat these challenges using Mockaroo.

Challenges in Application Development, Testing and Actual Deployment: 

  • Quick generation of abundant, reliable and realistic data
  • Lot of manual efforts for test engineers in populating test data and avoiding repetitive test data
  • Requires support for multiple data types (mail address, street address, Bitcoin Address, Blank, Null, country, currency, date, sequence, GUID, various versions of name including European and Chinese, lat / long, etc.) to load test data
  • Generating realistic data in multiple formats (CSV, JSON, SQL and Excel Formats, etc.)
  • Needs to load realistic data promptly without any programming skills



Mockaroo is a tool that addresses these challenges well. It is a realistic test data generator that lets you generate up to 1,000 rows at a time in SQL, CSV, JSON, and Excel formats. To extend this limit, one can choose from their range of pricing plans.

It supports 74 different data types, where each type provides relevant sample data that is used to populate the field.

data types

Testing realistic data has two distinct advantages:

  1. It mimics the production environment and allows you to identify the challenges you may face in real time, thus helping to make the application more robust.
  2. While demonstrating the app features to other users, realistic data makes it easier and quicker to understand.

Using Mockaroo frees you from technical aspects of test data generation, such as learning mock data libraries, performing stress testing, etc. You can focus more on application development and leave the rest to this tool. It allows you to download large amounts of randomly generated test data based on your specifications, and lets you load the data into the test environment using CSV or SQL formats with no programming.

How it Works:

Step 1: Go to the official Mockaroo website

Step 2: Open your Table Schema

table schema


Step 3: Enter Field Name and Type similar to your Table Schema.

field name


Step 4: Enter a value for Rows, i.e., how many records you want to generate. Select an output format such as JSON, SQL, Excel, or CSV. You can either download or preview your data. The data preview will be displayed like this:

code preview


Step 5 (Optional): Mockaroo also provides a REST API (a simple GET request) through which you can download your data programmatically, as sketched below.
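As a rough sketch only, downloading the generated data programmatically could look like this in Node.js; the URL is a placeholder, so use the REST URL (including your API key) that Mockaroo provides for your schema.

// Rough sketch: downloading generated rows from the Mockaroo REST API.
var https = require('https');

// Placeholder: paste the URL Mockaroo shows for your schema, including your API key.
var MOCKAROO_URL = 'https://api.mockaroo.com/...';

https.get(MOCKAROO_URL, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var rows = JSON.parse(body);          // assumes the schema's output format is JSON
    console.log('Downloaded ' + rows.length + ' rows');
  });
});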


Mockaroo is an outstanding tool in the application development process for those who want quick and efficient random data generation.

6 Mobile App Development Trends that are expected to Rule in 2016

Mobile apps have become increasingly prevalent in our lives as well as our workplaces. Customers' demand for richer mobile app experiences has also been increasing significantly year by year, and businesses are trying to exceed customer expectations by innovating and advancing their technologies every year.

Check out these 6 mobile app development trends that are expected to rule in 2016.

Mobile app trends

IoT takes Center Stage

Many things are already connected using IoT technology. IoT is advancing rapidly, encouraging businesses to continually introduce more and more smart devices and wearables into the market. It has extended to the point where almost every device can connect to another device. In 2016, we can expect IoT to grow towards its full potential and take center stage.

Augmented Reality based Services

AR technology has been progressing continually over the last couple of years and is expected to become the next big technology trend. It is being implemented by various organizations to create immersive user experiences and engagement. In the coming year, we will see more augmented-reality-integrated mobile app launches that deliver amazing experiences for customers and more business for companies.

Virtual Reality

VR is likely to hit the market in a big way this year. Initially, VR was mostly limited to gaming, with devices like the Oculus Rift, Samsung Gear VR, and Google Cardboard. However, in the future, VR is expected to extend beyond gaming. If VR takes off, it will transform every business, from entertainment to commerce.

More Beacon and Location-based Services

Beacon and location-based services have already gained awareness among several businesses. These services open up a new world of possibilities for interacting with potential customers. Lately, retail businesses have adopted this technology widely; they use it to deliver the right information at the right place as customers walk by. We can expect this trend to continue gaining force this year and to spread into many more businesses.

Artificial Intelligence

Last year, AI went mainstream. Major companies like Google, Amazon, Facebook, and Twitter made massive investments in AI, and a few of them also open-sourced their tools. Hence, in 2016, AI may transform the market as companies discover new ways to apply it.

Cloud-based App Development

Cloud-integrated mobile apps help address the growing use of smart devices and wearable technology by providing the ability to sync data and apps across multiple devices. In 2016, cloud-based app development will grow further and is expected to play a significant role in mobile app development.

So, these trends are predicted to make a remarkable mark on the mobile app development landscape in 2016. Adopting these advanced technologies in your business can bring strong growth and result in a great user experience. The Vmoksha team is excited and prepared to develop some innovative mobile apps in 2016.

Enterprises Need to Focus More on Unit Testing, Says Ram Lakshmanan

Ram Lakshmanan, CEO of Tier1App LLC, San Francisco, says that enterprises need to focus first on unit tests and only then on end-to-end tests.

Recently, Ram visited Vmoksha Technologies and honored us with a valuable presentation on software testing re-invented. Every single day, millions of people in North America bank, travel, and do commerce using applications that Ram has developed. He has architected mission-critical applications for a major financial organization whose products/services are used by one in every three households in the USA, and he has built a B2B travel application that processes more than 70% of North America's (USA & Canada) leisure travel transactions.

Through his vast experience, Ram found testing to be one of the most important parts of the software development process. He decided to find out how testing can be done more efficiently, and then he came across Platform X, a cross-platform approach to testing.

According to Ram, the common testing anti-pattern is:

  • A large volume of manual testing
  • A high number of automated end-to-end tests
  • A smaller number of service integration tests
  • And a narrow foundation of unit tests

He then listed a few drawbacks of manual and end-to-end testing:

  • Takes long time to complete
  • Finding the root cause for a failing end-to-end test is painful
  • Partner and lab failures ruin the test results
  • Multiple bugs hid behind one failure
  • End-to-end tests were flaky at times
  • Developers had to wait till the next day to know if a fix worked or not

Ram's main idea is that enterprises need to concentrate first on unit tests and integration tests, and only then on manual and automated end-to-end tests. This is made practical through Platform X, a cross-platform technology for testing, which gives accurate and deterministic test results.

He also mentioned that a typical architecture for testing an enterprise application should include:

Code Coverage Tool – This tells you what percentage of the application your tests are actually exercising.

White-box Unit Testing – For testing a single unit of code in several dimensions, as in the sketch below
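For readers new to the idea, a unit test is just a small, fast check of one piece of code in isolation. A minimal sketch in Node.js (the discount function and its rules are invented purely for illustration) might look like this:

// Minimal unit-test sketch using Node's built-in assert module.
// calculateDiscount and its rules are hypothetical, purely for illustration.
var assert = require('assert');

function calculateDiscount(orderTotal) {
  if (orderTotal >= 1000) return 100; // flat discount for large orders
  if (orderTotal >= 500)  return 25;  // flat discount for medium orders
  return 0;                           // no discount otherwise
}

// Each assertion exercises one "dimension" of the unit's behaviour.
assert.strictEqual(calculateDiscount(1200), 100);
assert.strictEqual(calculateDiscount(600), 25);
assert.strictEqual(calculateDiscount(100), 0);

console.log('All unit tests passed');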

Ram once again said, "We are not against end-to-end testing. We want to do end-to-end testing, but with a limited focus." He also said that in large-scale enterprises, testing mobile applications requires a lot of time and effort. In such cases, Appium, a cutting-edge open-source test framework for mobile devices, helps analyze user behavior promptly. Similarly, the Selenium tool is helpful for browser-based end-to-end testing, and SOAP UI can be used for testing web services. To overcome challenges such as one-time data, inconsistent data, and availability constraints, virtualizing all the backend calls is preferred in order to achieve more accuracy. Tools like SonarQube help analyze source code and report bugs in it. Integrating all these test tools in Platform X helps find a bug right at the time the code is committed.

Finally, he wrapped up the presentation saying that a qualified Test Software Engineer can give the best quality release.

As Vmoksha is planning to adopt automated testing soon, Mr. Ram's presentation was helpful and encouraged us to proceed further.

Mr. Ram while giving his presentation

Software Testing

Ram with Vmokshaites after the presentation

Software Testing