Friday, May 19, 2006

Wide Area File Services

If you are a large enterprise customer with many sites or many data centres sending large amounts of data over the WAN, you need to learn about WAFS. WAFS, or Wide Area File Services, is a relatively new technology for an old concept. The goal of WAFS is to make file services and links across the wide area more efficient.

Networking professionals are not normally involved in decisions regarding file servers and data caching, but this case is different. These devices sit at the edge of the WAN links and, at some point, will require networking professionals to be involved in either implementation or troubleshooting. This alone should put WAFS on the networking radar.

Different vendors have different approaches to WAFS, but the goals are the same – to move data across WAN links 10 to 100 times (or more) more efficiently than it moves today. Many new possibilities for services and applications are created with this kind of WAN improvement.

So how does this work?

In the past, data compression was used to help WAN links carry more data, but compression alone saves only a limited amount of bandwidth. WAFS combines data compression with other techniques to improve efficiency. The most common techniques are stream/data compression, data caching, protocol optimization and application optimization.

Today’s data compression techniques are similar to those of the past – redundant patterns of data are recognized and replaced with shorter tags that act as placeholders for the full stream. On the other end, the tags are recognized and the full stream is re-inserted. So if you see a run of 100 “0”s, you send a short token meaning 100 x “0”.
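
To make the idea concrete, here is a minimal run-length encoding sketch in Python. It is an illustration only – the function names are mine, and real WAFS appliances use far more sophisticated dictionary- and stream-based compression:

```python
def rle_encode(data):
    """Collapse runs of repeated characters into (character, count) pairs."""
    encoded = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        encoded.append((data[i], j - i))
        i = j
    return encoded


def rle_decode(encoded):
    """Re-expand (character, count) pairs back into the original stream."""
    return "".join(ch * count for ch, count in encoded)


stream = "0" * 100                    # a run of one hundred zeroes
tokens = rle_encode(stream)           # -> [("0", 100)]: one small token
assert rle_decode(tokens) == stream   # the far end rebuilds the full stream
```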

The next technique is data caching. Network data moving between sites, in many cases, is repetitive. A document is opened, basic changes are made and then the document is saved. Most of the document is identical except for the changes. Enter WAFS data caching. The first time the document is sent across the link, the WAFS system analyses it for patterns that can be stored. The WAFS cache creates a tag for this data and stores it on both sides of the connection. The next time the same data is to be sent across, the tag is sent instead. The data can be in the same file or some other file (e.g. the same document attached to an e-mail). At the remote end the tag is recognized and replaced with the real data. This technique saves significant amounts of bandwidth when the same data is sent back and forth.
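
Here is a rough sketch of that caching behaviour, assuming fixed-size chunks and SHA-1 digests as the “tags” – real products use their own chunking schemes and tag formats, so treat this strictly as an illustration:

```python
import hashlib
import os

CHUNK_SIZE = 4096  # assumption: fixed-size chunks; real appliances often chunk on content


def send_with_cache(data, sender_cache, receiver_cache):
    """Simulate one transfer across the WAN with a cache on each side.

    Returns the number of bytes that actually crossed the link."""
    bytes_on_wire = 0
    received = bytearray()
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        tag = hashlib.sha1(chunk).digest()
        if tag in sender_cache:
            bytes_on_wire += len(tag)          # both sides know this chunk: send the 20-byte tag
            received += receiver_cache[tag]    # remote end swaps the tag back for the real data
        else:
            bytes_on_wire += len(chunk)        # first time: send the full chunk
            sender_cache.add(tag)
            receiver_cache[tag] = chunk
            received += chunk
    assert bytes(received) == data
    return bytes_on_wire


sender_cache, receiver_cache = set(), {}
doc = os.urandom(200_000)                                    # stand-in for a document never seen before
first = send_with_cache(doc, sender_cache, receiver_cache)   # the whole document crosses the link
second = send_with_cache(doc, sender_cache, receiver_cache)  # e.g. the same file attached to an e-mail
print(first, second)                                         # the second transfer is a tiny fraction of the first
```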

Protocol optimization works at the transport and higher layers of the OSI model, recognizing chatty protocols whose overhead can be reduced. The WAFS system learns what communication must take place and what can be spoofed by looking at different parts of a network conversation. Spoofing is the process of acknowledging a request locally rather than sending it all the way to the end device for an acknowledgement. There are a number of opportunities in today's protocols to make improvements. The WAFS vendors have made their systems pretty smart and indicate that no tuning is required. But if you are going to tinker with communications protocols, you should understand what is going on under the hood.
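
To see why local acknowledgements matter, a back-of-the-envelope calculation helps. The latency figures and the spoofing ratio below are assumptions made up for illustration, not vendor numbers:

```python
# Illustrative figures only.
wan_rtt_ms = 80      # round trip across the WAN
lan_rtt_ms = 1       # round trip to the local WAFS appliance
round_trips = 200    # chatty exchanges needed to open one remote file

without_wafs = round_trips * wan_rtt_ms

# Suppose the appliance can safely acknowledge 90% of the exchanges locally.
locally_acked = int(round_trips * 0.9)
with_wafs = locally_acked * lan_rtt_ms + (round_trips - locally_acked) * wan_rtt_ms

print(without_wafs)  # 16000 ms without spoofing
print(with_wafs)     # 1780 ms with local acknowledgements
```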

Application optimization is the same idea as protocol optimization: take specific applications that endlessly communicate back and forth, and reduce their traffic on the link through spoofing and other techniques. By sending only small tags that represent the communications, you free up bandwidth. This is especially true for file-access protocols such as NFS or CIFS, which let users browse network drives and end up sending essentially the same data repeatedly. Again, you may not need to tune anything here, but you should understand what is going on.

This is a new market, but the vendors are lining up to get in. The two big networking vendors, Cisco and Juniper, have each bought start-ups: Actona and FineGround went to Cisco, while Peribit went to Juniper. However, they are not the only game in town. There are any number of other companies in this field with impressive performance numbers and features. Some of the more notable are F5, Riverbed, Tacit (recently bought by Packeteer), Expand Networks in partnership with DiskSites, and Availl.

The question is not whether this technology works, but whether it is useful for you. The vendors have all shown significant performance improvements on model networks. Network traffic profiles, while broadly similar from company to company, are unique to each organization; they depend on how your business and IT organization have structured the data and where it resides. With centralized data, many branches and high latency between sites, WAFS may be a great solution. With decentralized data, small transactions between sites and no strain on the WAN, spend your money elsewhere. The only way to tell for sure is with a pilot. Make sure to try different vendors as well, since each uses slightly different techniques in different combinations.

Device failure is a concern. If a device fails on a link, the link should continue to function. Failures can be hard, such as a power supply failure, or soft, such as an OS or software bug or crash. While hard failures do happen, the soft failures are more typical. Make sure there is a good answer for both.

While this is not an exciting technology area, it is definitely a useful one. WAFS is a great technology that solves some very real problems for customers that fit the WAFS profile. You should evaluate different vendor offerings to determine the actual amount of benefit, and make sure that your links are protected in case of failure. As this area becomes hotter in the next 12 to 18 months, expect to see some of the smaller vendors disappear or get acquired. If your company is having issues across the WAN and is looking at more bandwidth, this technology is definitely worth a look.

Thursday, May 04, 2006

Bill 198 and Network Security

Most Canadian enterprises are familiar with the U.S. Sarbanes-Oxley Act, which sets new standards for corporate governance and financial reporting, but an equivalent Canadian bill is getting less attention. That doesn't mean network managers can afford to ignore it, though. In fact, if they don't ensure their security and IT governance practices meet the regulations, their companies could find themselves in a lot of trouble.

Ontario Bill 198 passed into law in December 2002, allowing the Ontario Securities Commission and the Canadian Securities Administrators to pass their own instruments (regulations) permitting the imposition of penalties and jail time. Instrument OSC/CSA 52-109 (Certification of Disclosure in Issuers' Annual and Interim Filings) was passed in January 2004, and instrument OSC/CSA 52-111 (Reporting on Internal Control over Financial Reporting) passed in February 2005. Instrument 52-109 is equivalent to section 302 of the US Sarbanes-Oxley (SOX) Act, and 52-111 is equivalent to SOX's section 404.

Instrument 52-109 essentially says that companies must be truthful in their financial statements and put in place systems and processes to ensure this. The effective date for this was March 30th, 2005.

Instrument 52-111 requires that the CEO and CFO certify that they are responsible for adequate internal controls, that those controls are based on a recognized framework and supported by “evidential matter”, that they attest to the effectiveness of the controls (including reporting any weaknesses), and that external auditors report on all of this. The effective date for this instrument is June 30th, 2007.

Both of these regulations apply to any publicly traded company in Canada, bringing Canadian laws in line with those of the US. From a technology perspective, the significant portion of these two regulations is in 52-111, where the concepts of control, a governance framework, and “evidential matter” (essentially auditable logs and data collected in a very specific way) are introduced.

The regulation calls for implementing adequate controls in a company by using an accepted IT governance framework. There are three potential frameworks that can meet the level of IT control called for – COSO/COBIT, ITIL (ISO 20000) and ISO 17799. ITIL and ISO 17799 are fairly international in their scope and flavour, while COBIT was developed in the US but is equally applicable in Canada.

Here is some background on these frameworks.

ITIL® (the Information Technology Infrastructure Library) is closely related to ISO 20000. It was developed by the British government, beginning in the late 1980s, to address increasing business and government reliance on IT systems. ISO 17799 is also based on a British standard (BS 7799-1), but is aimed specifically at information security rather than serving as a generic governance model. As such, ISO 17799 is designed to protect the infrastructure from misuse rather than to govern it. COBIT® (Control Objectives for Information and related Technology) was developed by the IT Governance Institute and ISACA (the Information Systems Audit and Control Association). Both ITIL and ISO 17799 are older than COBIT, but are just as relevant to these regulations. All are aimed at implementing best practices around the governance and security of IT infrastructure.

Common to all of the frameworks is a structured approach to the implementation and management of IT systems such as the network, along with the ideas of due diligence and due care. This means that an organization must be able to show not only that it has taken care to secure its data and network, but also that it has done so using a best-practices model. The new regulations provide an impetus for security by putting in place penalties for failing to adequately protect IT infrastructure.

For network security, this means understanding which assets are at risk, what each asset is worth, what the risks are, and how to protect the assets and reduce the risks in a way that can be verified in an audit. Most companies think of an IT asset as the data on servers and workstations and not the network itself. While most of the value is in the data, the network has a role to play.
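
One common way to put numbers on this is an annualized loss expectancy calculation. The approach and the figures below are my own illustration – the regulations do not prescribe any particular formula:

```python
# Illustrative risk valuation; every figure here is invented for the example.
asset_value = 2_000_000       # e.g. customer data on a central file server, in dollars
exposure_factor = 0.4         # fraction of the asset's value lost in a single incident
incidents_per_year = 0.5      # expected frequency (here, once every two years)

single_loss_expectancy = asset_value * exposure_factor                # 800,000 per incident
annual_loss_expectancy = single_loss_expectancy * incidents_per_year  # 400,000 per year

# A control (say, network access control plus audit logging) is worth considering
# when its annual cost is clearly below the reduction in expected loss it delivers.
control_cost = 150_000
ale_with_control = annual_loss_expectancy * 0.25   # assume the control cuts the risk to 25%
print(annual_loss_expectancy - ale_with_control - control_cost)       # net benefit: 150,000
```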

Implementing good network security practices is part of all the frameworks. This means putting in access control systems, using encryption sensibly, and perhaps linking the network to back-end directory services in order to keep user lists current. In addition to this, many companies would benefit from implementing a good Public Key Infrastructure certificate system, and then combining that with directory services and network access.

Companies also need to put in place processes to review their networks regularly. Areas under review should include the number, type and identity of all devices attached to the network. IT departments should regularly review active access control lists (ACLs) on all routers and switches and check for stale or unknown entries. ACLs should be coordinated between devices of the same type (say, all the routers) and between different types of devices (say, between routers, switches, firewalls and directory services). For any access to the organization from the Internet, sufficient controls, consisting of strong authentication and tightly controlled authorization, must be in place to ensure that risks to assets are minimized.
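
As a small illustration of the kind of review that can be automated, here is a sketch that flags ACL entries pointing at hosts no longer in the corporate inventory. The device names, addresses and data structures are invented for the example; in practice the configurations and inventory would come from your own management systems and directory:

```python
# Hypothetical data: ACL entries pulled from device configurations,
# and the current host inventory from an asset database or directory.
acls = {
    "router-toronto": ["10.1.1.10", "10.1.1.11", "10.9.9.9"],
    "firewall-hq": ["10.1.1.10", "172.16.5.5"],
}
inventory = {"10.1.1.10", "10.1.1.11", "172.16.5.5"}


def find_stale_entries(acls, inventory):
    """Return, per device, the ACL entries that no longer match a known asset."""
    stale = {}
    for device, entries in acls.items():
        unknown = [addr for addr in entries if addr not in inventory]
        if unknown:
            stale[device] = unknown
    return stale


for device, entries in find_stale_entries(acls, inventory).items():
    print(device, "has stale entries:", entries)   # router-toronto has stale entries: ['10.9.9.9']
```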

How much is done depends on the value a company places on its assets and the level of risk it is comfortable with. The new regulations have effectively increased that asset value by punishing companies and individuals who cannot demonstrate how they have protected the data and that they are complying with the regulations.

In the past, many companies ignored good security and IT governance practices, particularly when it came to the network. These companies felt that unless the public discovered a problem, they could get by doing the minimum necessary to keep systems functioning. With the passage of these new laws and regulations, public companies will now need to demonstrate to external auditors that they have taken steps to protect their valuable information in ways that can be verified. In addition, companies will now be forced to disclose the weaknesses in their systems, and presumably to rectify any problems identified. Companies must address any holes in network security and governance now or face the consequences when the legislation becomes enforceable.

Thursday, March 23, 2006

Welcome to Canadian Networking

Welcome to my blog on Canadian networking.

This blog is about telecom networking (LAN, MAN, WLAN, telecom carriers) issues in Canada. I will try to post at least once per week on relevant networking issues. I will also entertain discussions on the articles I write for Network World Canada. The column is called "Down to the Wire" and covers essentially the same topics. I will post links to the columns in Network World on the left. The blog allows us to discuss the topics further - if anyone is interested.

My column in the latest issue of Network World talks about the need for network-based authentication in today's networks.

In today's business environment, network authentication, especially when linked to some sort of back-end authentication system - e.g. one based on RADIUS - is pretty important. Unfortunately, most companies ignore this technology because it is too hard to implement. Or at least it used to be.
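
For a sense of what talking to a RADIUS back end looks like, here is a small sketch using the pyrad Python library. The server address, shared secret, dictionary file and credentials are placeholders, and this only exercises the RADIUS exchange itself, not the switch-side 802.1X piece:

```python
import pyrad.packet
from pyrad.client import Client
from pyrad.dictionary import Dictionary

# Placeholders: point these at your own RADIUS server and attribute dictionary.
srv = Client(server="radius.example.com",
             secret=b"shared-secret",
             dict=Dictionary("dictionary"))

# Build an Access-Request for a (hypothetical) user coming in through an edge switch.
req = srv.CreateAuthPacket(code=pyrad.packet.AccessRequest,
                           User_Name="jsmith",
                           NAS_Identifier="edge-switch-01")
req["User-Password"] = req.PwCrypt("user-password")

reply = srv.SendPacket(req)
if reply.code == pyrad.packet.AccessAccept:
    print("access accepted")
else:
    print("access rejected")
```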

Times have changed.

While network authentication is still not a walk in the park, many vendors and consortia have stepped up to the plate. Cisco's NAC and the TCG's Trusted Network Connect Sub Group (TNC SG) are both examples of efforts in this area. Add to this consulting groups like Blue Spruce Technologies, hardware solutions from LockDown Networks, and software solutions from Great Bay Software, and things get easier.

It is time to implement network authentication in enterprise networks in Canada.