In the past, chatbots mainly provided canned answers to simple questions, but the increased sophistication of artificial intelligence (AI) has allowed chatbots to fulfill a useful role within enterprise customer support departments. However, as chatbots become more sophisticated and useful, they collect more valuable and sensitive data, making chatbot security an important priority.
Enterprise chatbots are designed to fulfill a customer service role, and a crucial part of this job is data collection.
Like a human customer service representative, a chatbot often needs to collect personal data in order to help with an issue or complaint. At a minimum, it will request a name and some other piece of identifying information (account number, email address, phone number, etc.) in order to locate the relevant account in the company's system.
The security issues with chatbots arise after this data is collected. The data needs to be transmitted from the client's browser to the enterprise's server. If that data is not secured in transit, a hacker could eavesdrop on the transmissions. Similarly, failure to properly secure and discard data at rest could leave a large repository of customer personal information sitting on an untrusted server.
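The in-transit half of this problem comes down to refusing to move personal data over unencrypted channels and verifying the server you are talking to. As a minimal sketch (the URL and payload here are illustrative placeholders, not a real chatbot API), a client-side transmission routine might look like this:

```python
import json
import ssl
import urllib.request


def send_chat_message(url: str, payload: dict) -> bytes:
    """Send collected customer data to the chatbot backend over TLS only.

    Refuses plain-HTTP URLs so personal data is never sent in cleartext.
    """
    if not url.lower().startswith("https://"):
        raise ValueError("refusing to send customer data over an unencrypted channel")

    # create_default_context() enables certificate and hostname
    # verification, which defeats simple eavesdropping and
    # man-in-the-middle attempts.
    context = ssl.create_default_context()
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, context=context) as response:
        return response.read()
```

The key design choice is failing closed: if the endpoint is misconfigured to plain HTTP, the client raises an error rather than silently transmitting personal data in the clear.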
If the chatbot is running on the enterprise website, then securing communication channels and data at rest is fairly straightforward. Offering chatbot functionality only on HTTPS webpages protects the data in transit, and a good organizational data management process protects the data at rest.
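Part of a good data management process is minimizing what gets stored in the first place. As a hedged sketch (the regular expression below is a deliberately simple illustration; production systems would use a dedicated PII-detection tool and a formal retention policy), a transcript could be scrubbed of anything resembling a payment card number before it is written to long-term storage:

```python
import re

# Illustrative pattern only: 13-16 consecutive digits, optionally
# separated by single spaces or dashes, ending on a digit.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")


def redact_card_numbers(transcript: str) -> str:
    """Mask anything that looks like a payment card number before the
    chat transcript is persisted, so stored logs never contain raw PANs."""
    return CARD_PATTERN.sub("[REDACTED CARD]", transcript)
```

Redacting before storage means that even if the transcript repository is later breached, the most sensitive fields are simply not there to steal.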
However, with the increased usage of social media platforms as an extension of organizations' marketing and customer service departments, many organizations also operate chatbot software on these platforms. This means that the ability to secure sensitive customer data may not be completely within an organization's control.
Recent incidents have also demonstrated that social media platforms are willing to sell user data to third parties without authorization, meaning that data collected by your organization's chatbot could end up in the hands of unknown buyers.
The risk of chatbot-caused breaches is not a theoretical one. In recent years, several enterprises including Delta, Sears, and Ticketmaster had data breaches caused by a failure to secure chatbot systems.
Delta and Sears were breached in September 2017 via a hack of a third-party chatbot provider, [24]7.ai, that was discovered the following month. Both companies relied on a [24]7.ai chatbot as part of their customer service process. The provider's servers were infected with malware in September, leaking the payment card details of customers of both companies. Sears estimated that fewer than 100,000 customers had personal data exposed, while Delta believes the breach compromised the payment card information of hundreds of thousands of customers.
The impact of these breaches is significant due to the type of data breached and the affected users. Since the breached data was payment card information, the PCI-DSS standard applies in all of these cases. Ticketmaster's affected users were UK citizens, who are also protected under the GDPR.
Artificial intelligence in general, and chatbots in particular, are powerful tools for helping an organization operate at scale. Even simple chatbots can help with data collection, answer basic questions, and route inquiries, decreasing the load on human customer service representatives and improving service by reducing wait times and allowing agents to spend more time with each customer.
The volume and value of the data collected by chatbots make securing them a priority. The data collected, transmitted, and stored by chatbots needs to be protected at the level required by any applicable regulations (GDPR, HIPAA, PCI-DSS, etc.).
If you have a chatbot system or plan to implement one, reach out for a consultation to learn the best ways to ensure that your system is secure.