
AUUG – The Organisation for Unix, Linux and Open Source Professionals

Program Abstracts

Tutorials: 10 October 2006

All tutorials are half-day unless listed otherwise.

Advanced PF Rulesets

by Ryan McBride (Full day)

Writing a ruleset for PF is trivial at first, but as the network and policies become more complex and packet rates increase, more advanced ruleset techniques become necessary.  Furthermore, as PF's capabilities increase, the number of different options becomes dizzying, even for a PF developer.

This tutorial will present several PF ruleset paradigms that can be used to make your ruleset more scalable, both in performance and in maintainability.  In addition to troubleshooting techniques, it will also cover some of the newer and more advanced PF capabilities, including DoS mitigation techniques, load balancing, and integration with routing, bridging, CARP, and IPsec.

Security and Usability

by Peter Gutmann

An important consideration when building an application is the usability of the security features that it will employ.  Security experts frequently lament that security has been bolted onto applications as an afterthought; however, the security community has committed the exact same sin in reverse, placing usability considerations second behind security, if they were considered at all.  As a result, we spent the 1990s building and deploying security that wasn't really needed, and now that we're experiencing widespread phishing attacks, with viruses and worms running rampant, and the security is actually needed, we're finding that no-one can use it.  This talk will look at security usability principles for applications, covering everything from the initial design stages through to final pre-release usability testing.

Security Risk Management

by Lawrie Brown

This tutorial will present an overview of security risk management, including the critical risk assessment process.  This aims to identify threats to, impacts on and vulnerabilities of information and information processing facilities, and the likelihood of their occurrence, so that these threats can be controlled and minimised at an acceptable cost.  Unfortunately, this process is often not managed well.  An overview of relevant international and national standards that provide guidance on this process will be presented.  The latter part of the tutorial will be a "simplified case study", walking through an example risk assessment for a hypothetical (though based on an actual) organisation, using the process outlined in the recently revised DSD ACSI 33.  This standard is mandated for Commonwealth government use, but provides good guidance for anyone who needs to undertake such a process.

Optimising MySQL Applications Using the Pluggable Storage Engine Architecture

by Arjen Lentz

In this tutorial, we will take an in-depth look at MySQL's "Pluggable Storage Engine" architecture.  Understanding the features and trade-offs in each engine allows developers to optimise their applications by making appropriate choices and tuning the MySQL server appropriately for their needs.

For example, logging page clicks on a web site places completely different requirements on a database from, say, tracking customers and sales.  Functionally, either can be done using generic solutions, but by utilising specific features available in specialised storage engines, extraordinary performance improvements can be attained.  This becomes particularly relevant when an application has specific speed and scalability requirements that a general-purpose storage system simply cannot meet.

In MySQL, the storage engine can be selected on a per-table basis.  This means that different engines can be used from within a single application, as appropriate for the application's needs.  In many cases, the application need not even be aware which engine is used.  In this tutorial, the different available storage engines will be compared.  Also, the fundamentals of adding new storage engines will be discussed.
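As a simple illustration of per-table engine selection, the sketch below creates an append-only click-logging table and a transactional customer table with different engines.  This is a sketch only: the table names, columns, credentials and the use of the mysql-connector-python driver are assumptions, not material from the tutorial.

```python
# Sketch: choosing a different MySQL storage engine per table.
# Assumes a reachable MySQL server and the mysql-connector-python driver;
# the schema is purely illustrative.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="webshop"
)
cur = conn.cursor()

# High-volume, append-only click logging: ARCHIVE trades indexes and
# updates for very compact, fast inserts.
cur.execute("""
    CREATE TABLE IF NOT EXISTS page_clicks (
        clicked_at DATETIME     NOT NULL,
        url        VARCHAR(255) NOT NULL,
        client_ip  VARCHAR(39)  NOT NULL
    ) ENGINE=ARCHIVE
""")

# Customer and sales data: InnoDB provides transactions, row-level
# locking and foreign keys.
cur.execute("""
    CREATE TABLE IF NOT EXISTS customers (
        id    INT PRIMARY KEY AUTO_INCREMENT,
        name  VARCHAR(100) NOT NULL,
        email VARCHAR(100) NOT NULL
    ) ENGINE=InnoDB
""")

conn.commit()
cur.close()
conn.close()
```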

MySQL Cluster Tutorial

by Stewart Smith

MySQL Cluster is a highly available clustered storage engine for MySQL.  In this tutorial we will cover the following:

There will be hands-on components.


Conference Keynotes: 11-13 October 2006

Google Maps -- Organizing the World's Information, Geographically, by Lars Rasmussen (Google)

In this talk I will discuss the many pieces of the puzzle comprising Google Maps.  We built, for example, the Google Maps site as a single page consisting almost entirely of JavaScript.  (The acronym "AJAX" was later coined to describe this approach.)  I will discuss the pros and cons of AJAX, and delve into some particular technical challenges we had to meet.

I will also give a high-level overview of the challenges involved in working with spatial data: making it searchable, routable, and browsable.  The field is relatively new to Google, and most of the challenges still lie ahead.

The Convergence of Internet Security Threats, by Peter Gutmann (University of Auckland, NZ)

Just as the Internet has subsumed all earlier networking technology (ARPAnet, ATM, BITnet, DECnet, Ethernet, ISDN, JANET, NSFNET, and many more), so an omnibus Internet security threat is gradually subsuming all earlier discrete threats (ID theft, phishing, script kiddies, spam, spyware, trojan horses, viruses).  Instead of being small-scale (if prolific) nuisances perpetrated mostly by script kiddies, these blended threats are increasingly being created by professional programmers and managed by international criminal organisations.  The Convergence of Internet Security Threats looks at the methods and technology behind this blended virus/trojan/spam/phishing/ID theft/credit-card fraud threat, various less-than-effective attempts to address it via legislation, technology, and press releases, and some suggestions for potentially effective legislation and other protective measures.
The Consumer is Dead - Long Live the Consumer!, by Arjen Lentz (Lentz Internet Services)

Many websites clearly indicate that the companies behind them don't "get it".  What don't they get?  Let me count the ways...

If you've ever read "The Cluetrain Manifesto", much of this may actually sound somewhat familiar.  However, I will discuss matters in an applied way, with live demonstrations where possible.

We'll look at various issues, among them:

  • site navigation: do we rely on a search box (or, even worse, Google)?
  • does the company engage (potential) customers in a meaningful conversation?
  • what do you mean, "does this company have a business model?"

In short, this keynote is a funny/educational rant about bad websites and the misconceptions that cause them to still be created.  And about silly business models.


Conference Papers: 11-13 October 2006

Technology commercialisation and the Internet, by Greg Adamson
It can be argued that commercially successful technology-based products represent the result of competing marketing power at the point of commercialisation rather than the ability of a particular technology to meet users' needs.  The contest between Beta and VHS video cassette recordings is widely cited.  Such cases suggest that the success of a product has no necessary relationship to the quality of its underlying technology.  This paper argues that the Internet can be considered an example of what happens to a technology-based service when technology selection is not determined through a commercialisation process.  The paper finds an absence of strong commercial direction during the Internet's development period.  By the time of its commercialisation, which this paper places at 1995, the Internet already had tens of millions of users, and the commercialisation process has not fundamentally changed its pattern of usage.  The paper compares this with the development of radio and television services and concludes that for most communication media technologies the process of commercialisation strongly influences the underlying technology on which the successful commercial service is based.  The Internet, by contrast, retains a predominance of useful over commercially necessary technical characteristics.
Address Management for Biggish Networks, by Karl Auer
This presentation describes how ETHZ, Switzerland's largest university, leverages its central repository of address and name information using DHCP, DNS and DDNS to implement a fully dynamic and largely automated address and name management system.  The bigger and more volatile your network, the more you stand to gain from automating address management.  DHCP makes desktop and laptop configuration much simpler; this is a huge saving for an organisation like ETHZ, which installs thousands of machines every year and where many machines, especially laptops, change networks every day.  But DHCP is just the bottom layer: someone still has to manage the address spaces as the network changes, and addresses without names are not friendly, so name management is a big part of address management.  So centralise your addressing information.  Internalise, evangelise and formalise that "dynamic is good".  Resist the temptation to have static addresses or names except where absolutely necessary.  There are trade-offs, but dynamic address allocation, done properly, will save you huge amounts of time and money.  If you are planning to go to IPv6, going dynamic now will save much pain later.
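A minimal sketch of the kind of automation described above is generating DHCP and DNS data from a single central repository of hosts.  The CSV layout, file names and ISC dhcpd output format below are illustrative assumptions, not ETHZ's actual system.

```python
# Sketch: generate ISC dhcpd host declarations and DNS A records from a
# central repository of hosts. Assumes a CSV file with a header line
# "hostname,mac,ip"; the file names are hypothetical.
import csv

with open("hosts.csv") as repo, \
     open("dhcpd-hosts.conf", "w") as dhcp, \
     open("zone-hosts.db", "w") as zone:
    for row in csv.DictReader(repo):
        dhcp.write(
            "host %(hostname)s {\n"
            "    hardware ethernet %(mac)s;\n"
            "    fixed-address %(ip)s;\n"
            "}\n" % row
        )
        zone.write("%(hostname)s\tIN\tA\t%(ip)s\n" % row)
```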
Developing Efficient Backup Strategy For Centralized User File System, by YDS Arya
In an organisation with a few hundred or thousand users, user files are stored on one system, called the file server.  This allows users to work on various systems in the organisation using one logical disk area (home directory).  However, this convenience requires the file server to be highly available, with all its data kept intact against hardware, software and human failures.  Proper backup is the ultimate protection against user data loss, but backup has its overheads in terms of media cost and system time.  The system administrator has to arrive judiciously at the right backup policy to meet user needs and the organisation's financial constraints.  Users have to be informed in advance about the backup policy, which specifies the extent of possible user data loss and the recovery time needed.  The paper discusses the failsafe measures available with popular storage devices, to give a feel for what is already provided at the hardware level, and then the measures that can be applied to maximise protection at that level.  The necessity of user file system backup, followed by a discussion of developing a backup plan, is taken up next.  Finally, a case study of a large educational institution (around 5000 active users) using recent backup devices (MSL 6000 from HP and StorEdge L-40 from Sun) and recent software (Data Protector and Solstice) is presented.
Open Source ETL with Kettle, by Jonathon Coombes
Service-Oriented Architecture: A look under the hood, by Jonathon Coombes
Building the Australian Grid, by Frank Crawford
The Australian Partnership for Advanced Computing (APAC) is the peak facility for High Performance Computing in Australia, hosting the most powerful system in the country.  APAC also has partner sites in each state (VPAC, ac3, TPAC, etc.), which also run very powerful systems.  APAC, like most HPC organisations throughout the world, is implementing a High Performance Grid to seamlessly link these systems.  This presentation will outline the technologies involved and, in particular, will describe the gateway systems that have been established to provide a common link to all systems while also providing a common security framework.
SMTP or HTTP tar pits?  Which one is more efficient in fighting spam?, by Tobias Eggendorfer
Currently, unsolicited commercial email (UCE, spam) is the biggest threat to email communication: more than 80% of all emails sent are spam.  There are different attempts to fight spammers; one is to delay the collection of email addresses by setting up HTTP tar pits, another is to delay the transmission of spam by setting up an SMTP tar pit.  I compare these two methods with a view to their efficiency.  I will then analyse how SMTP and HTTP tar pits could be combined to increase their efficiency, and provide results from a real-world test.
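To make the tar-pit idea concrete, here is a minimal sketch of an SMTP tar pit: it accepts connections and drips its replies out one byte at a time, so each spam delivery attempt ties up the sender for as long as possible.  The port, delays and replies are illustrative assumptions, not the implementation compared in the paper.

```python
# Sketch of an SMTP tar pit: answer every SMTP command, but send the
# reply one byte at a time so each delivery attempt is slowed down.
# Port number and delays are illustrative.
import socket
import time

def drip(conn, line, delay=2.0):
    """Send an SMTP reply one byte at a time, pausing between bytes."""
    for byte in line.encode("ascii"):
        conn.sendall(bytes([byte]))
        time.sleep(delay)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 2525))
server.listen(5)

while True:
    conn, peer = server.accept()
    try:
        drip(conn, "220 tarpit.example.org ESMTP\r\n")
        while conn.recv(1024):          # read whatever the client sends
            drip(conn, "250 OK\r\n")    # and answer it very, very slowly
    except OSError:
        pass
    finally:
        conn.close()
```

A production tar pit would of course handle many connections concurrently; the single-threaded loop above only illustrates the delaying technique.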
Dynamic obfuscation of email addresses?  A method to reduce spam, by Tobias Eggendorfer
Filtering unsolicited commercial email has proved untrustworthy, both by not keeping spam out of the inbox and by marking solicited mail as unsolicited.  As filters only cure a symptom of the spam epidemic, it seems more promising to attack the problem at its roots.  According to different studies, spammers rely on email addresses published in machine-readable form on the world wide web.  After testing different approaches to obfuscating email addresses on the web, experimental results indicate their usefulness [1].  However, modifying existing web pages only to obfuscate the mail addresses on them means a lot of work, and forces web designers to understand the techniques and concepts used to conceal addresses.  Instead, this paper presents an output filter for the common Apache webserver which allows mail addresses to be obfuscated even on dynamically generated web pages, without the need to change any page.
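The core transformation such an output filter performs can be sketched in a few lines: rewrite every plain-text address in the outgoing HTML as numeric character references, which render identically in a browser but are harder for naive harvesters to match.  The sketch below is a stdin-to-stdout filter (which could, for example, be hooked into Apache via mod_ext_filter); the regular expression and encoding choice are assumptions, not the paper's implementation.

```python
# Sketch: rewrite email addresses in an HTML stream as numeric character
# references. Runs as a stdin-to-stdout filter; the regex is deliberately
# simplistic and not the paper's actual matcher.
import re
import sys

ADDR = re.compile(r"[\w.+-]+@[\w.-]+\.[A-Za-z]{2,}")

def obfuscate(match):
    # "user@example.org" -> "&#117;&#115;&#101;&#114;&#64;..."
    return "".join("&#%d;" % ord(c) for c in match.group(0))

for line in sys.stdin:
    sys.stdout.write(ADDR.sub(obfuscate, line))
```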
Implementing a caching HTTP proxy with an anti virus content filter on a bridge, by Tobias Eggendorfer
Almost everywhere that people have access to the Internet, they will access the world wide web to browse web pages, and in most situations a certain percentage of pages is requested more than once.  To reduce external traffic it is common practice to install a caching HTTP proxy.  As a proxy operates at OSI layer 7, it is able to understand the content passing through it, and can therefore also serve as a content filter.  Filtering content may follow from different requirements: one might be censoring material intended for adults, for example in a school environment; another might be security, since Windows computers are likely to be infected with viruses, worms and trojans through security holes, for example in Microsoft's Internet Explorer.  Implementing a virus scanner on the proxy could reduce this risk.  Installing a proxy in an existing network often means reconfiguring parts of the network; even a so-called transparent proxy would require changing the firewall.  To make it possible to add a security proxy without changing anything besides plugging the network wire into the proxy's network adaptor, the proxy can be installed on a bridge.  This paper explains how to implement a transparent, caching proxy with anti-virus content filtering on a bridge, and discusses the advantages of different implementations.
L4/Darwin: Evolving UNIX, by Charles Gray

UNIX has remained a mainstay of modern computing.  With its foundations of security, reliability, performance and configurability, UNIX has adapted to, and is used in, a vast array of environments.  While UNIX fosters robustness, modularity and a "smaller is better" philosophy, that scrutiny is generally not applied to the kernel itself.  Modern UNIX kernels are large, unwieldy code bases that do not enjoy the benefits seen in the user environment.

Apple's Darwin kernel is the open source core of the Mac OS X operating system.  Like most modern UNIX systems, the kernel boasts modern features such as 64-bit support, robust hot-plug, and support for server workloads.

Darbat, the L4/Darwin port, aims to address the problem of the ever-growing UNIX kernel.  Using the high-performance L4 microkernel, Darbat can isolate kernel modules such as device drivers using hardware protection while maintaining binary compatibility.  This modularisation also allows Darbat to use L4 as an advanced hypervisor to support multiple operating system instances for server consolidation.

This talk will cover the ongoing design and implementation of the Darbat project and the team's experiences bringing the strengths of UNIX into the UNIX kernel itself.

Phishing Tips and Techniques: Tackle, Rigging, and How & When to Phish, by Peter Gutmann (University of Auckland, NZ)

This talk looks at the technical and psychological background behind why phishing works, and how this can be exploited to make phishing attacks more effective.  To date, apart from the occasional use of psychology grads by 419 scammers, no-one has really looked at the wetware mechanisms that make phishing successful.  Security technology doesn't help here, with poorly-designed user interfaces playing right into the phishers' hands.

After covering the psychological nuts and bolts of how users think and make decisions, the talk goes into specific examples of user behaviour clashing with security user interface design, and how this could be exploited by attackers to bypass security speedbumps that might be triggered by phishing attacks.  Depending on your point of view, this is either a somewhat hair-raising cookbook for more effective phishing techniques, or a warning about how these types of attacks work and what needs to be defended against.

(Warning: talk may contain traces of cognitive psychology.  Keep away from small children.)

Building Web2 Applications with django, by Ian Holsman
Web 2.0 is a set of interrelated technologies which, when used together, can empower the consumer.  Django is a web application framework written in Python which, while not initially designed for Web 2.0, allows programmers to get their applications written quickly and to make use of the ideas that Web 2.0 publicises.  Zyons is an example of one such application.
The Economics of Open Source, by Lev Lafayette
Economics is the study of the production, distribution and consumption of resources and is distinct from the discipline of commerce.  Whilst the study of commerce is concerned with maximising profit for the single organisation, economics is concerned with the most efficient aggregate result.  From this basic distinction, the competing models of proprietary knowledge versus open source knowledge can be analysed initially according to the role of reflexive labour and then according to monopolistic versus competitive markets, comparative advantage, productivity, supply and demand equilibrium and elasticity, and finally the role of anti-trust legislation.
Introduction to the MySQL Falcon OLTP storage engine, by Arjen Lentz
Can Open Systems lower your car insurance ... AND save your life?, by Paul McGowan
This talk is about an idea for implementing large-scale, self-managed vehicle location logging for the purpose of providing feedback about driving performance to at-risk drivers (among others).  Following on from the collection of the data is a selection of possible uses for it, and some of the ramifications of having it for road safety, insurance, law enforcement, privacy, etc., as well as a number of business opportunities that logically present themselves once the data exists in sufficient quantity.  Just for good measure, I might tackle the possibilities for both committing (and therefore preventing) fraud with the data.  The implementation (in its current incarnation) is based on GPS position, velocity and time (PVT) data acquisition via standard logging devices.  (It is also entirely theoretical at this time.)  That bit is, of course, old technology.  The interesting thing (from my point of view anyway) is not the data itself, but what useful things can be done with it, and why it is important to recognise the ones we don't want so we can have a good crack at preventing them.  My talk will cover some innovative uses of this data, some of the important privacy implications of such a system, and how Open Systems can be used to address these issues and capture useful data without compromising the driver's security or privacy.
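For readers unfamiliar with the raw data involved, the position/velocity/time information mentioned above typically arrives as NMEA 0183 sentences from the GPS receiver; the short sketch below pulls position, speed and time out of a $GPRMC sentence.  The sample sentence and field handling are illustrative only, not part of the proposed system.

```python
# Sketch: extract position, speed and time from an NMEA 0183 $GPRMC
# sentence, the kind of PVT record a standard GPS logger produces.
# The sample sentence is illustrative only.

def parse_gprmc(sentence):
    fields = sentence.split(",")
    if not fields[0].endswith("GPRMC") or fields[2] != "A":
        return None  # not a GPRMC sentence, or no valid fix

    def to_degrees(value, hemisphere):
        # NMEA packs latitude as ddmm.mmmm and longitude as dddmm.mmmm
        head, minutes = divmod(float(value), 100)
        degrees = head + minutes / 60.0
        return -degrees if hemisphere in ("S", "W") else degrees

    return {
        "time_utc":  fields[1],                        # hhmmss
        "latitude":  to_degrees(fields[3], fields[4]),
        "longitude": to_degrees(fields[5], fields[6]),
        "speed_kmh": float(fields[7]) * 1.852,         # knots -> km/h
        "date":      fields[9],                        # ddmmyy
    }

if __name__ == "__main__":
    sample = "$GPRMC,083559.00,A,3751.65,S,14507.36,E,22.4,084.4,230606,,,A"
    print(parse_gprmc(sample))
```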
Open Network Platform, by Andrew McRae
Much research in networking takes place outside of routers and network switches, mostly in hosts.  This constrains the research, because the researcher cannot effectively incorporate new applications or test facilities into proprietary equipment, owing to its closed nature.  Alternatives such as using dedicated Linux hosts are not always possible due to lack of performance or lack of sophisticated networking features.  This paper presents a framework, available on the Netdevices mid-range router, designed to allow customers and researchers much closer integration of their applications and research tools with the heart of the packet-processing engine.  The intent is to allow customers and researchers to run their code actually within the router, for maximum effect, but in a way that is secure and robust.
A Linux Task Manager, by Andrew McRae
The Netdevices product is an embedded cluster of CPUs that is treated externally as a single router.  Internally, services and applications can run on, and migrate between, different CPUs.  To facilitate this internal cluster, a Task Manager was developed that provides a high level of process management, such as active process watchdogs, service migration to different CPUs, automatic process restart, process grouping, and group start/stop.  Another concept that was implemented is a services directory, where processes automatically discover which CPU provides selected services via internal IPC, and clients are rehomed when services move to different CPUs due to CPU hotswap or failure.  This paper describes the technical details underpinning this Task Manager.
Asterisk Tools for OSX, by Devraj Mukherjee
Asterisk is an implementation of a telephone private branch exchange (PBX) that works both with traditional phone systems and with voice over IP (VoIP).  Asterisk is incredibly flexible and has quickly gained popularity for building customised PBX solutions.  Asterisk provides a TCP socket-based management interface that talks a key/value-style protocol.  The Asterisk Manager Interface (AMI) allows integration of third-party applications with Asterisk.  The protocol is fairly well documented all over the web and there are various implementations of the API in possibly all commonly used programming languages.  Asterisk Tools for OS X was an attempt to integrate the Asterisk PBX with OS X users in a seamless way.
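The key/value flavour of the AMI protocol is easy to see with a raw socket: an action is a block of "Key: Value" lines terminated by a blank line, and responses come back in the same format.  The sketch below logs in and sends a Ping; the host name and credentials are placeholders, and real code would normally use one of the existing AMI libraries mentioned above.

```python
# Sketch: talking to the Asterisk Manager Interface (AMI) over its raw
# TCP socket (default port 5038). Actions and responses are blocks of
# "Key: Value" lines terminated by a blank line. Host and credentials
# are placeholders.
import socket

def send_action(sock, **fields):
    block = "".join("%s: %s\r\n" % (k, v) for k, v in fields.items())
    sock.sendall((block + "\r\n").encode("ascii"))

def read_block(reader):
    lines = []
    for line in reader:
        line = line.strip()
        if not line:
            break
        lines.append(line)
    return lines

sock = socket.create_connection(("pbx.example.org", 5038))
reader = sock.makefile("r", encoding="ascii", newline="\r\n")

print(reader.readline().strip())            # banner, e.g. "Asterisk Call Manager/..."
send_action(sock, Action="Login", Username="admin", Secret="secret")
print(read_block(reader))                   # expect "Response: Success"
send_action(sock, Action="Ping")
print(read_block(reader))
send_action(sock, Action="Logoff")
sock.close()
```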
Network Monitoring Tool to Identify Malware Infected Computers, by Navpreet Singh

These days most organisational networks face a critical problem: there has been a marked increase in malware such as worms, adware and spyware, which gets installed on users' PCs and generates network and Internet traffic without the users' knowledge, i.e. in the background.  As a result, the effective utilisation of the network, especially the Internet link, is drastically reduced by this unwanted traffic.

This tool monitors the network traffic and identifies all the (active) computers on the network that are infected with any kind of malware.  It provides the IP address, MAC address and type of infection for the identified hosts.  There may be some hosts for which it cannot determine the type of infection, but it can still identify them.
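One common heuristic behind tools of this kind is to watch for hosts opening an unusual number of new connections, for example mass-mailing worms hammering port 25 or scanners probing many distinct addresses.  The sketch below, using scapy with an arbitrary threshold, counts outbound TCP SYNs per source address; it illustrates the general approach and is not the tool described in the paper.

```python
# Sketch of a malware-traffic heuristic: count TCP SYNs per source host
# and flag hosts that exceed a threshold. Uses scapy; the threshold is
# arbitrary, and this is not the paper's tool.
from collections import Counter
from scapy.all import sniff, IP, TCP

SYN_THRESHOLD = 100          # SYNs seen before a host is flagged
syn_counts = Counter()
flagged = set()

def watch(pkt):
    # SYN bit set (this also counts SYN/ACKs; a real tool would be stricter)
    if IP in pkt and TCP in pkt and pkt[TCP].flags & 0x02:
        src = pkt[IP].src
        syn_counts[src] += 1
        if syn_counts[src] > SYN_THRESHOLD and src not in flagged:
            flagged.add(src)
            print("possible infection: %s (%d SYNs, last dst %s:%d)" %
                  (src, syn_counts[src], pkt[IP].dst, pkt[TCP].dport))

# Requires root privileges; sniffs indefinitely on the default interface.
sniff(filter="tcp", prn=watch, store=0)
```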

Hacked slugs, solving all your problems with little NAS boxes, by Michael Still
This talk will discuss how to get your own version of Linux running on a Linksys NSLU2, known to the Linux community as a slug.  This is a consumer grade network attached storage (NAS) system.  These devices are quite inexpensive, are physically small, and run on low voltage DC power.  I also discuss how to handle having your firmware flash go bad, and provide some thoughts on projects made possible by these devices.  The conference presentation will also include extra demonstrations of the process of flashing and setting up one of these devices.
Managing machines the slack way, by Michael Still
This talk will discuss some aspects of how Google handles its internal Unix installations, specifically its approach to infrastructure design and repeatable application installs.
Operations Documentation Framework, by Michael Strong
In a previous paper, at AUUG's 2005 conference, I discussed good documentation and how you can do it yourself.  My focus was primarily on the form, construction and structure of the documentation rather than on the content.  I can't say that it was intentional, but that paper now presents itself as the foundation for a broader series of presentations on system management and the part systems documentation plays in delivering effective system management.  This paper is the second instalment in that series.  In it, I discuss what should be in good systems documentation, why it should be there, and the corollary question: what do you expect a professional system administrator to have provided?
Wireless Insecurity 2006 by Neal Wise
This presentation discusses the basics of 802.11 wireless services, current trends and threats in wireless security, and risk mitigation strategies.  Neal has been collecting wardriving statistics on Melbourne since 2001; he will share some results and demonstrate how to incorporate UNIX-based wireless attack tools and techniques into your proactive wireless defence strategy.
A Consistent Approach to Measuring System Hardening by Neal Wise and Adam Pointon
"Hardening" or "purposing" systems isn't a new concept.  For UNIX-like systems some combination of platform-specific and general techniques (plus just plain breaking things) is required to attain a true "least privilege" result.  This presentation walks through some methods for "hardening" popular systems and introduces a concept for rating your results.

Conference Panels: 11-13 October 2006

IPv6 Adoption, chaired by Andrew McRae
A panel discussing issues related to the take-up of IPv6, now that support for it is increasingly being provided in commodity operating systems.  How fast will adoption be, and will it happen at all?
Security Issues, chaired by Lawrie Brown
Security is a continuing topic of concern.  This panel will look at some of the current issues and the responses to them.