Auditing Data Changes in SQL Server using Service Broker’s External Activator

Auditing Data Changes

Auditing of changes to data is an integral part of any Enterprise system’s database. Not only is this information crucial for tracking ‘who-did-what’, but at times, it is also a legal requirement.

In this article, I will demonstrate the use of Service Broker and its External Activator service for auditing data changes. Before getting to the code that shows this in action, I will make the case for this approach by looking at the other methodologies available. All code used here can be downloaded from a link provided at the end of the article.

So, let's first look at some techniques available in SQL Server 2008 for capturing information when data is modified.

1. SQL Server Audit -

Using database-level audit actions in SQL Server Audit, we can track which user is reading (running select statements) or performing DML actions (insert, update & delete) on chosen tables in the database. While this information can be extremely useful for a DBA, it is not very detailed from a data auditing perspective, i.e. it doesn’t give us ‘before-after’ snapshots of the modified data.

2. Change Tracking -

If we are interested in finding out which records in a table were added, deleted or modified since the last time we checked, Change Tracking is a wonderful mechanism for satisfying our requirements. However, like SQL Server Audit, this only tells us which records were modified, and not the details of what the modifications were.

Change Tracking can be extremely useful in relational databases which act as sources for data warehouses. Since ETL (Extract, transform & load) jobs require information on records to propagate to a warehouse, this method can be used to determine which records to read from the chosen tables.

3. Change Data Capture (CDC) -

While Change Tracking just tells us what was changed, Change Data Capture goes a step further and gives us the details of the data that was modified. However, it is not highly customizable. If we wanted to use it for our data auditing requirements and needed to track the user who made each change, we would have to add a field to the audited table to hold the user's details (user name, user id, etc.) and ensure that this field is updated with every insert and update operation, so that this information is propagated to the CDC tables. A delete operation, however, cannot carry the user's details into the CDC tables unless we first update the record with the user's information and then delete it.

Because of this, CDC falls short of a full blown auditing solution. One could argue that CDC was never built with data auditing in mind, and therefore it is not really a shortcoming of the methodology. CDC, like Change Tracking, is extremely powerful for data warehousing, as it frees us up from having to write any ‘infrastructure’ code to capture changes to data in a relational database.

Where to then?

So, how do we go about setting up the architecture to manage our auditing requirements? Can we have a system in place which will be highly customizable, as well as low on server load?

Some would say the answer is to make use of triggers which look at the inserted and deleted tables and then capture the required audit info in audit tables. This approach is certainly well suited for customization, as the logic for capturing the audit information is completely in the developer’s control.

For example, the user performing the operation can easily be captured with a simple function such as SUSER_NAME (or ORIGINAL_LOGIN, in case the execution context has been switched). Moreover, with this approach, we can choose to capture audit information only if key fields in the table were modified, by making use of the UPDATE(FieldName) function within the trigger's body. This logic can be extended with the COLUMNS_UPDATED function, which returns a bit mask representing the actual columns that were updated in the operation.
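As a minimal sketch, this might look as follows (the INVOICES table and its column names here are hypothetical placeholders, not the article's downloadable script):

```sql
-- Hypothetical trigger sketch: capture who made the change, but only
-- when a key column was part of the statement.
CREATE TRIGGER TRG_INVOICES_AUDIT ON INVOICES
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Only audit when a key field was touched by the statement.
    IF NOT (UPDATE(AMOUNT) OR UPDATE(STATUS))
        RETURN;

    -- ORIGINAL_LOGIN() still returns the real login after EXECUTE AS.
    DECLARE @who SYSNAME = ORIGINAL_LOGIN();
    -- ... capture @who along with the inserted/deleted data ...
END;
```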

Having said that, a drawback to this approach is that all operations within the trigger's body extend the duration of the end user's transaction: when a user of an application inserts, updates or deletes a record in a table, the trigger's operations become part of that user's transaction. This is not good from a performance point of view, as we do not want end users to be slowed down by auditing requirements.

Service Broker

This is where SQL Server’s Service Broker comes to the rescue. Service Broker gives us the ability to make these trigger operations asynchronous: by converting the information contained in the inserted and deleted tables to XML and passing it to a Service Broker queue, a trigger can quickly return control to the user’s transaction, while Service Broker takes care of the operations required for auditing on a separate, asynchronous thread.

It is important to note that even though it is asynchronous, Service Broker is a reliable messaging system: a message sent from a trigger is only delivered if the user transaction that sent it commits. Service Broker also scales out extremely well, so we can have multiple instances of ‘listeners’ which are activated by Service Broker based on its current load (details on how to use Service Broker are covered in the sections below).

1. Components

The main Service Broker components are message types, contracts, queues and services.

A typical Service Broker ‘conversation’ is quite easy to understand. When a conversation is initiated between two services, a message is passed from the initiator service to the target service. The message lands in a queue belonging to the target service. Each message is of a particular message type, which can include validation such as WELL_FORMED_XML. The conversation is governed by a contract, which specifies the message types each side may send.

Once Service Broker has processed the message in the queue via Activation (covered in the next section), the recipient service sends an acknowledgement back to the initiator service, and the conversation is completed.
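Sketched in T-SQL, the objects behind such a conversation might look like this (the AUDIT_* names match those used later in the article, but the exact definitions in the downloadable script may differ):

```sql
-- A message type with XML validation, and a contract governing who may send it.
CREATE MESSAGE TYPE [AUDIT_MESSAGE] VALIDATION = WELL_FORMED_XML;

CREATE CONTRACT [AUDIT_CONTRACT]
    ([AUDIT_MESSAGE] SENT BY INITIATOR);

-- One queue per service; messages for the target land in AUDIT_MESSAGES.
CREATE QUEUE AUDIT_INITIATOR_QUEUE;
CREATE QUEUE AUDIT_MESSAGES;

CREATE SERVICE [AUDIT_INITIATOR] ON QUEUE AUDIT_INITIATOR_QUEUE;
CREATE SERVICE [AUDIT_TARGET]    ON QUEUE AUDIT_MESSAGES ([AUDIT_CONTRACT]);
```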

2. Activation

Service Broker supports 2 forms of activation – Internal and External.

Internal Activation occurs when Service Broker makes use of stored procedures to process messages in its queues. External Activation occurs when Service Broker raises events which are monitored by external applications (outside the database). These events notify the application that messages have arrived in the monitored queue, and therefore processing can begin.

For Internal Activation, stored procedures can be written and assigned to queues as their readers. A value can be set on the queue which indicates the maximum number of readers that can be used in parallel when the load on the queue increases, i.e. when there is a large number of messages to be processed. This architecture is well suited towards scaling out.
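As a sketch, internal activation is configured on the queue itself (the stored procedure name here is hypothetical):

```sql
-- Attach an activation procedure to the queue; Service Broker will start
-- up to MAX_QUEUE_READERS copies of it in parallel as the queue fills up.
ALTER QUEUE AUDIT_MESSAGES
WITH ACTIVATION (
    STATUS            = ON,
    PROCEDURE_NAME    = dbo.PROCESS_AUDIT_MESSAGES,  -- hypothetical reader proc
    MAX_QUEUE_READERS = 4,
    EXECUTE AS OWNER
);
```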

However, with Internal Activation, all of the processing power needs to be drawn from the database’s server. If the messages passed to the queues are in XML format, performing complex manipulations on XML using SQL can be CPU intensive. For OLTP systems which cater to a large number of end user sessions and transactions, adding additional load to the production system’s database server is most certainly undesirable.

This leaves us with External Activation. External Activation takes the load of processing Service Broker messages away from the database server, as the applications which respond to Service Broker events can sit on an application server, separate from the database server. Moreover, when using XML messages, an application written in C# or VB.Net can typically perform complex manipulations much faster than T-SQL can. Also, multiple instances of the application can respond to events raised by Service Broker, allowing for a scale-out solution (this is demonstrated in the sample code in the following sections).

Findings

So, we have looked at the mechanisms available in SQL Server for capturing data changes. We’ve also worked through the use of triggers, both with and without Service Broker, and weighed the pros and cons of the two types of Service Broker activation.

From this, I hope it is clear that triggers which hand off to Service Broker, which in turn makes use of External Activation, form a powerful strategy for auditing data changes.

The next section will demonstrate this mechanism in action, along with details on how to work with the External Activator Service.

Ok, enough theory! Let’s cut some code!

In this section, I will attempt to demonstrate the use of Service Broker’s External Activator Service with some basic sample code. This code can be extended to suit individual auditing requirements. The scripts for the code can be downloaded from the attachment at the end of the article.

1. Databases

Using the components described above, it becomes easy to split a single OLTP database containing all tables (including the tables storing the audit information) into separate databases: one for the OLTP data and another for the audit information. Most OLTP systems generate a lot of data every day, and if a large part of such a system is audited, the database can quickly grow in size.

Using External Activation allows for the audit database to be moved to a separate instance of SQL Server, ideally on a separate box, which is great for a busy OLTP database as the load involved in writing data back to the auditing tables is taken away from the production box.

So, with this in mind, let’s create 2 databases. As mentioned above, the auditing database should ideally be on a separate instance of SQL Server on a box other than the production SQL Server’s box. In the sample code, both databases are created on the same instance.

2. Tables

To keep things simple, we’ll create a single table in the OLTP database which will be audited for any data change (insert, update or delete). The audit information will be stored in the Audit database, in two parent-child tables.

The design of the audit tables shown here is just a template for you to use as a starting point. It captures the key fields usually needed in auditing: the time of the change, the user who made it, the type of change, the fields that were changed, and each field’s value before and after the change. In large OLTP systems, most transaction tables generate huge amounts of audit data, so it may be worthwhile creating two audit tables per audited table instead of two generic audit tables, for quicker read access. This incurs an overhead, however: new structures have to be created each time a new table is audited. If that cost is undesirable, the audit tables should be created generically, with all audit data stored in the same pair of tables.

In the sample code, 2 separate tables are created for the table being audited.
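A sketch of such a parent-child pair might look like this (the INVOICES_AUDIT and INVOICES_AUDIT_DETAILS names match the tables used later in the article, but the column names are illustrative, not taken from the downloadable script):

```sql
-- Parent row: one per audited operation.
CREATE TABLE INVOICES_AUDIT (
    AUDIT_ID   BIGINT IDENTITY (1, 1) PRIMARY KEY,
    OPERATION  CHAR (1) NOT NULL,        -- 'I', 'U' or 'D'
    CHANGED_BY SYSNAME  NOT NULL,        -- user who made the change
    CHANGED_AT DATETIME NOT NULL
);

-- Child rows: one per changed field, with before/after values.
CREATE TABLE INVOICES_AUDIT_DETAILS (
    AUDIT_DETAIL_ID BIGINT IDENTITY (1, 1) PRIMARY KEY,
    AUDIT_ID        BIGINT  NOT NULL
        REFERENCES INVOICES_AUDIT (AUDIT_ID),
    FIELD_NAME      SYSNAME NOT NULL,
    OLD_VALUE       NVARCHAR (MAX) NULL,
    NEW_VALUE       NVARCHAR (MAX) NULL
);
```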

3. Service Broker

As mentioned in previous sections, Service Broker architecture makes use of a conversation between 2 services. The script below shows the objects that need to be created to get such a conversation going. If required, the destination service can be in a database other than the initiator service, or even in a separate instance of SQL Server by making use of Routes. In this article, all Service Broker objects are created in the database where the audit data originates.

The Service Broker conversation is initiated by the AUDIT_INITIATOR Service. The audit information, which is data from the inserted and deleted tables available from a trigger under the table being audited, is sent as an XML message from the AUDIT_INITIATOR Service to the AUDIT_TARGET Service. This message also contains additional information such as name of the user who performed the operation, the time of the operation, etc.
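The heart of the trigger can be sketched as follows (the XML shape is illustrative; the downloadable script contains the full version):

```sql
-- Inside the trigger on the audited table: wrap inserted/deleted plus
-- context into XML and hand it to Service Broker, then return control
-- to the user's transaction.
DECLARE @handle  UNIQUEIDENTIFIER,
        @message XML;

SET @message =
    (SELECT SUSER_NAME() AS [user],
            GETDATE()    AS [time],
            (SELECT * FROM inserted FOR XML PATH('row'), TYPE) AS [inserted],
            (SELECT * FROM deleted  FOR XML PATH('row'), TYPE) AS [deleted]
     FOR XML PATH('audit'));

BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [AUDIT_INITIATOR]
    TO SERVICE   'AUDIT_TARGET'
    ON CONTRACT  [AUDIT_CONTRACT]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @handle
    MESSAGE TYPE [AUDIT_MESSAGE] (@message);
```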

The code shown below creates all required database objects (the code can be downloaded from the attachment at the end of the article). After executing the script, once you add, edit or delete a record in the INVOICES table, a message should be created in the AUDIT_MESSAGES queue.

4. External Activator Service and Auditing Application

Now that we have all the database components in place, we need to create the application which will perform the auditing operation and save the audit information back to the AUDIT_DB database. Instances of this application will be spawned by the External Activator Service.

The External Activator service is a free download from Microsoft (while the download page says it is a SQL Server 2008 component, the service can also be used with a SQL Server 2005 database). It is a Windows service that should ideally be installed on a separate application server (i.e. not the database server). The service is configured to point to the OLTP_DB database (the configuration file is shown below). When there are messages in the AUDIT_MESSAGES queue, the service spawns an instance of the auditing application, which we will write to process the XML messages.

The service ‘knows’ that there are messages to be processed because Service Broker raises a Queue Activation event (AUDIT_EVENT) when messages arrive. When this happens, a record is added to the EXTERNAL_ACTIVATION queue. The service keeps polling this queue, and when messages arrive, instances of the Auditing application are spawned.
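Roughly, the plumbing behind this looks like the following (AUDIT_EVENT and EXTERNAL_ACTIVATION match the names above; the service name for the notification queue is my assumption, not taken from the downloadable script):

```sql
-- Queue and service that receive the QUEUE_ACTIVATION notifications.
CREATE QUEUE EXTERNAL_ACTIVATION;

CREATE SERVICE [EXTERNAL_ACTIVATION_SERVICE]  -- assumed name
    ON QUEUE EXTERNAL_ACTIVATION
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

-- Fire AUDIT_EVENT whenever activation triggers on the audit queue.
CREATE EVENT NOTIFICATION AUDIT_EVENT
    ON QUEUE AUDIT_MESSAGES
    FOR QUEUE_ACTIVATION
    TO SERVICE 'EXTERNAL_ACTIVATION_SERVICE', 'current database';
```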

The service can be configured to spawn multiple instances of the auditing application to cope with heavy load. The recommended maximum number of instances is the number of CPU cores on the machine.

The first step is to download and install the External Activator Service. The download links are as follows -

X86 Package – http://go.microsoft.com/fwlink/?LinkId=130678&clcid=0x409

X64 Package – http://go.microsoft.com/fwlink/?LinkId=130679&clcid=0x409

IA64 Package – http://go.microsoft.com/fwlink/?LinkId=130680&clcid=0x409

If you have followed the default installation options, the service’s binaries, config and log files will have been installed in “C:\Program Files\Service Broker\External Activator” or “C:\Program Files (x86)\Service Broker\External Activator”, depending on your machine’s platform. An explanatory document called SSBEA.doc is also provided as part of this installation (under the “Bin\en” folder), which provides detailed information about the component. The configuration file that needs to be set up is called EAService.config (under the Config folder).

The configuration file with all the required settings is shown below (the file can be downloaded from the attachment at the end of the article). Remember that the server name, database name, etc. are case sensitive, so be sure you enter them in the right case. If you are unsure of the case of the SQL Server’s name, run SELECT @@SERVERNAME against any database to get the value in the correct case. The server name needs to be changed in the highlighted sections.

Also make sure the account under which the External Activator service is running has adequate permissions to access and work with the OLTP_DB, since the sample application makes use of Integrated Security.

All that remains after this is to create the Auditing application. For this article, I’ve used the sample code provided by Microsoft’s Service Broker team from http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-components-postattachments/00-09-97-64-26/Program_2E00_cs and modified it to process the XML audit messages (The code has been translated to VB.Net). The modified code is available as part of the download at the end of this article.

Once the code has been compiled as a console application (called AuditApp.exe) and placed in “C:\audit” folder, the External Activator service can be started (from Windows Services). If all settings are correct, the service will spawn an instance of the Auditing application when there are auditing messages to be processed. The application will pull the audit messages from the AUDIT_MESSAGES queue, process the XML and create records in the AUDIT_DB database’s INVOICES_AUDIT and INVOICES_AUDIT_DETAILS tables.
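Under the hood, the application’s message loop boils down to T-SQL along these lines (a simplified sketch; the real application receives messages in batches and inside a transaction):

```sql
-- Pull one audit message off the queue, waiting up to 5 seconds.
DECLARE @handle       UNIQUEIDENTIFIER,
        @message_type SYSNAME,
        @body         XML;

WAITFOR (
    RECEIVE TOP (1)
        @handle       = conversation_handle,
        @message_type = message_type_name,
        @body         = CAST(message_body AS XML)
    FROM AUDIT_MESSAGES
), TIMEOUT 5000;

-- After shredding @body into INVOICES_AUDIT / INVOICES_AUDIT_DETAILS,
-- end the conversation so the initiator gets its acknowledgement.
IF @message_type = 'AUDIT_MESSAGE'
    END CONVERSATION @handle;
```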

The snapshots shown below demonstrate the auditing for an Insert, Update and Delete operation on the same invoice. The results are cumulative, so the final snapshot has the results of all 3 operations.

Conclusion

In this article, we first went through the features available in SQL Server 2008 for auditing changes to data. We weighed the pros and cons of each, reviewed Service Broker’s capabilities, looked at the two types of activation it supports, and discussed the External Activator service and the advantages of using it for auditing. Finally, we built some sample code and walked through the entire process.

I hope this article gives you enough information before you start building your own custom auditing solution. The purpose behind writing this article was to provide a starting point, from which you can tailor your own strategy as per your business requirements. If you have any questions or comments/recommendations, please send me an email or alternatively leave a comment at the end of this post.

Code

The code used in this article can be downloaded from here


About Ajit Ananthram
Ajit Ananthram is an MCSA (SQL Server 2008), MCITP (SQL Server 2008), MCTS (SQL Server 2008 Business Intelligence) and an Oracle Certified Associate. He has a Masters degree in IT from the University of Technology, Sydney. He has over 8 years of experience working with SQL Server from version 2000 to 2012. He currently works as a Database Administrator at Kennards Hire in Sydney, Australia.

14 Responses to Auditing Data Changes in SQL Server using Service Broker’s External Activator

  1. Swami says:

    Hi
    Thanks for the very detailed writeup. We were able to simulate the entire flow, w/o much difficulty. I have a couple of questions on this overall solution:
    1. We see that the External Activator Service polls periodically (in fact once every 3 seconds). Is this configurable?
    2. The EA windows service invokes an .exe when it sees a msg in the EA Q. Does EA support invocation of .NET DLLs ?

    • 1. The polling interval is not configurable. The only configurable settings are those available in the External Activator’s configuration file.
      2. The External Activator service has been built to start a process. Since a Dll by itself is not a standalone executable, this cannot be invoked. If you have to call a specific Dll, I’d say build an EXE which calls your Dll, and use this as the target of the EA service.

  2. Swami says:

    Thanks for your response.

  3. Kenny Kee says:

    Hi, I had no problem starting the Service Broker External Activator (Windows service) on Machine A, polling the data from Machine B’s OLTP_DB database into the AUDIT_DB database, with both databases on the same instance.

    The Windows service runs without any error messages but stops polling data after I moved AUDIT_DB to another database server. Any idea what is missing?

    AuditApp.exe has no problem polling data when run manually. I guess something could be wrong with the Windows service or permissions?

    • The first place to start looking for the cause is the External Activator service’s log file. Ensure the service’s config file has the Verbose trace flag set (as shown in my article), restart the service if the flag wasn’t set previously and then review the messages in the log file. If you’ve used the default installation path for the service, the log file would be present at “C:\Program Files\Service Broker\External Activator\Log” and it is called EATrace.log. If you are unable to resolve the errors, send me the log file. My email address is ajitan14@gmail.com.

  4. Very slick! Makes me wish for a SQL Server under enough auditing load to qualify for a service broker.

  5. Midhun says:

    Nice Article! But is there a way to configure multiple service broker enabled databases in the EAConfig.config file?

    • One External Activator instance can service multiple Application queues. So, if you have multiple queues in different databases that need to be serviced, you can configure a single External Activation queue in one database and have all Application queues feeding Queue Activation messages into the single External Activation queue which is monitored by the External Activation service.

  6. Jay says:

    Hello

    We are implementing a similar solution. Based on other things I have read, I have been trying different strategies for reusing a conversation for some time rather than creating a new conversation on every trigger fire. In your experience, is it worth the effort, or is the overhead incurred by creating a new conversation on every trigger fire inconsequential?

    I’m also concerned about message ordering. I know it’s guaranteed per conversation, but will conversations be processed in the order committed?

    Thank you for the great article.

    • Hey Jay, thanks for your feedback.

      To answer your first question, getting the trigger to reuse an open conversation would be more efficient if your database deals with a very large number of transactions. For most systems including some busy ones, creating a new conversation each time is not a major hassle, as long as you remember to end each conversation correctly.

      As for the second question, yes, message ordering is guaranteed for a given conversation. For a solution such as an auditing application, ideally, you would want multiple readers reading the messages from the queue and processing them in parallel, and therefore the order in which messages get processed should not matter. Since the XML messages contain the date-time stamp, this can be passed back and stored in the tables, which can then give you the order in which the actual (in this case audited) actions took place.

  7. Hi Ajit,

    The article is very clear in terms of auditing requirements. I see that the audit is done per table. Is there a way to process the audit for a transaction that affects multiple tables? Is there a way to get the entire transaction in the audit XML, something like log mining, which can then be processed and synchronized to another OLTP system?

    Does the External Activator Windows service require me to write separate handlers to handle the audit events of each table, or is there a generic way to handle it?

    Please share your thoughts on LinkedIn’s Databus CDC pipeline.

    Thanks,
    Ajeeth

    • Thanks for your feedback.

      To answer your first question, the pattern I’ve used is built to audit on a table level, so if you wanted to audit on a transaction level, you might need to pass a transaction-specific attribute in each XML packet and then process the messages belonging to one transaction in a batch using the External Activator service.

      For the 2nd question, the External Activator service simply responds to messages coming into the processing queue. It does not differentiate between messages belonging to different tables.

  8. Excellent article.
    I’m seeing messages in the queue in the database, and when I manually run AuditApp.exe it correctly processes the queue and saves to the audit table in the new database. However, the External Activator does not seem to start AuditApp.exe, even though I have the configuration correct and the External Activator service is running. I’ve included the output of EATrace.log.
    Would really appreciate any thoughts.

    Kieran

    02/07/2014 15:13:15 ====== ================================================================================
    02/07/2014 15:13:15 ====== ================================================================================
    02/07/2014 15:13:15 INFO The External Activator service is starting.
    02/07/2014 15:13:15 INFO Initializing configuration manager …
    02/07/2014 15:13:15 INFO Reloading configuration file C:\Program Files\Service Broker\External Activator\config\EAService.config …
    02/07/2014 15:13:15 INFO Reloading configuration file completed
    02/07/2014 15:13:15 VERBOSE Running recovery using recovery log file C:\Program Files\Service Broker\External Activator\log\EARecovery.rlog …
    02/07/2014 15:13:15 VERBOSE Checkpointing recovery log C:\Program Files\Service Broker\External Activator\log\EARecovery.rlog …
    02/07/2014 15:13:15 VERBOSE Checkpointing recovery log completed
    02/07/2014 15:13:15 VERBOSE Running recovery completed
    02/07/2014 15:13:15 INFO Initializing configuration manager completed
    02/07/2014 15:13:15 VERBOSE Starting worker threads…
    02/07/2014 15:13:15 VERBOSE Worker threads are successfully started.
    02/07/2014 15:13:15 INFO The External Activator service is running.
    02/07/2014 15:13:15 VERBOSE CM-NS-Thread is starting…
    02/07/2014 15:13:15 VERBOSE Heartbeat-Thread is starting…

    • Looks like the event isn’t being raised in the database for messages arriving in the queue. Try recreating the event (AUDIT_EVENT) and see if this starts the external activation process.

      A good test for this is to also check that messages are being placed in the EXTERNAL_ACTIVATION queue when the External Activator service is switched off, and that these messages are pulled off the queue when the service is turned back on.
