The world according to cdlu




All stories filed under conferences...

  1. 2004-04-14: Real World Linux 2004, Day 1: A real world experience
  2. 2004-04-15: Real World Linux 2004, Day 2: Keynotes
  3. 2004-04-16: Real World Linux 2004, Day 3: The conclusion
  4. 2004-05-10: OS conference endures PowerPoint requirement on Day 1
  5. 2004-05-11: Red Hat, Microsoft clash at open source conference
  6. 2004-05-12: Creative Commons highlights final day of OS conference
  7. 2004-07-22: Ottawa Linux symposium offers insight into kernel changes
  8. 2004-07-23: Linux symposium examines technicalities of upcoming Perl 6
  9. 2004-07-24: OLS Day 3: Failed experiments, Linux-Tiny, and the Linux Standard Base
  10. 2004-07-25: Ottawa Linux Symposium day 4: Andrew Morton's keynote address
  11. 2005-04-19: LWCE Toronto: Day 1
  12. 2005-04-20: LWCE Toronto: Day 2
  13. 2005-04-21: LWCE Toronto: Day 3
  14. 2005-07-22: Ottawa Linux Symposium, Day 2
  15. 2005-07-23: Ottawa Linux Symposium, Day 3
  16. 2005-07-25: Ottawa Linux Symposium, Day 4
  17. 2006-04-25: Security and certification at LinuxWorld Toronto
  18. 2006-04-26: Wikis, gateways, and Garbee at LinuxWorld Toronto
  19. 2006-04-27: Wine, desktops, and standards at LinuxWorld Toronto
  20. 2006-07-10: PostgreSQL Anniversary Summit a success
  21. 2006-07-20: First day at the Ottawa Linux Symposium
  22. 2006-07-21: Day two at OLS: Why userspace sucks, and more
  23. 2006-07-22: Day 3 at OLS: NFS, USB, AppArmor, and the Linux Standard Base
  24. 2006-07-23: OLS Day 4: Kroah-Hartman's Keynote Address
  25. 2007-06-25: DebConf 7 positions Debian for the future
  26. 2007-06-28: Day one at the Ottawa Linux Symposium
  27. 2007-06-29: Kernel and filesystem talks at OLS day two
  28. 2007-06-30: Thin clients and OLPC at OLS day three
  29. 2007-07-02: OLS closes on a keynote
  30. 2007-10-15: Ontario LinuxFest makes an auspicious debut
  31. 2008-07-24: Ottawa Linux Symposium 10, Day 1
  32. 2008-07-25: OLS: Kernel documentation, and submitting kernel patches
  33. 2008-07-28: OLS 2008 wrap-up

Displaying the most recent stories under conferences...

OLS 2008 wrap-up

Day 3 of this year's Ottawa Linux Symposium featured a number of sessions, most notably a keynote address by Ubuntu founder and space tourist Mark Shuttleworth, who called on the greater Linux community to start discussing synchronicity, his term for synchronising major software releases. The conference wrapped up on Saturday with some final interesting sessions and statistics.

Shuttleworth was, per OLS tradition, introduced by 2007 keynote speaker James Bottomley, who showed a graph of Shuttleworth's Linux kernel-related mailing list contributions over the years, noting three years in which nothing happened -- the first in which he received half a billion dollars, the second in which he was "not on planet Earth," and the third in which he was busy founding the Ubuntu Linux distribution.

Shuttleworth's talk was called "The Joy of Synchronicity." It was a visionary statement about how to grow the Linux market for everyone and reduce software development waste. As the world changes, so too must we, he said.

Development has to be driven by three major factors, Shuttleworth said: Cadence, Collaboration, and Customers.

Cadence is the pace of release of any given project. It is a regular, predictable time at which the next version will be released, a release cycle tied to the calendar. For example, Ubuntu is targeting a six-month release cycle for point releases with a predictable two-year release for major releases, he said. GNOME is good at this, though its initial attempts were met with some difficulties. It is now on a six-month cycle, and KDE is beginning to explore the idea.

Synchronicity, for Shuttleworth, is all about coordinating the cadences of several projects for the benefit of the customers. If the Linux kernel, gcc, KDE, and GNOME, to start, were always at the same version in each co-released Linux distribution, Shuttleworth argued, it would reduce code waste and help grow the Linux market for everyone.

The point is simple. In Shuttleworth's vision, distributions would all be on the same versions of major software, but would always retain their other traits, differentiating them from each other and keeping the diversity of Linux distributions as lively as it is today. The predictability of releases would help all around.

Kernel developers, he argued, would have an easier time developing if they knew exactly which versions of the kernel would be used when by what distributions. The same would apply to all aspects of the open source community.

Shuttleworth expressed hope that such a predictable, marketing-friendly setup would grow the total Linux market for every distribution and market.

Sustainable Student Development in Open Source

Earlier in the day I attended an interesting talk by Chris Tyler of Seneca College who discussed a strategy the school has developed to educate students in open source technology and development.

Many students get involved in open source software and the community on their own, but do it outside of their coursework. Seneca College, from Tyler's explanation, has been looking to incorporate open source development directly into the curriculum. Under its system, senior-year students at Seneca are offered a list of open source projects seeking help, from which they choose one to contribute to as their class project.

Most of the efforts so far have been within the Mozilla project. One thing Tyler noted is that students are not used to large projects. Thousands or tens of thousands of lines of code is something that students can grok, and understand right through. But once they start dealing with larger projects, like Mozilla, which are in the millions or tens of millions of lines, there is too much code for any one person to know right through.

The other side to that is that faculty can also be overwhelmed. It is critical, Tyler noted, that faculty involved in this program be both familiar with the academic environment, as professors necessarily are, and integrated with and active in the open source community. Without that intrinsic understanding of the community, faculty members cannot be expected to do well. To that end, Tyler commented that other institutions have contacted his department about using the curriculum, but they are advised that it is not the curriculum that makes the project a success, but that integration between the faculty and the open source community.

A significant difference that Tyler noted between open source projects and normal assignments for students is that in a typical assignment, the student is responsible for the complete coding project, from design to implementation. In an open source project, they can be using code that already exists as part of a larger project and is as much as 20 years old.

While Tyler indicated that open source was clearly not for all students, some of whom are not happy working on group projects in that way, he said the successes far exceeded the failures. He cited a number of examples, one of which was a student who took on the challenge of documenting a previously undocumented API. This led to the question of how such assignments are graded. Tyler explained that the marking is done as an assessment of the contribution to the open source project and the accomplishment of the student's stated goals. Thus it does not necessarily have to be a coding project to be a successful project.

Another example he cited was a student who developed an animated Portable Network Graphics implementation which he called apng. It was less cumbersome than MNG, the PNG project's implementation of the same task, and has been merged into Firefox, Opera, and soon Microsoft Internet Explorer, although it was rejected by the PNG group itself.

The course requires real contribution to real, existing open source projects as a normal new contributor. A critical component of the course is to encourage the developers and the students to interact on an ongoing basis, preferably actually meeting in person at some point, during the course. As one example, he noted that students and developers of Mozilla interact on an ongoing basis in the #seneca channel on irc.mozilla.org.

Tyler said that the course works within an open source philosophy in its own right. The course notes and outline are posted on wikis, the projects are developed with them, and coursework is submitted through a developer blog aggregator, with each student required to create an aggregated blog to cover his progress. This setup also allows other members of the projects involved to keep up the accuracy of the course and project information.

More information about his project can be found on Tyler's own blog.

Peace, love, and rockets

Worth brief mention is Bdale Garbee's talk on using open source and open hardware to build a useful telemetry system for model rockets. Garbee spent some time outlining the model rocket hobby and explaining the shortcomings of altimeters and accelerometers currently available, namely that they are not easily hackable. He said he has been told that his main hobby is turning his other hobbies into open source projects.

The fourth and final day of Ottawa Linux Symposium started for me with an entertaining trip down memory lane by D. Hugh Redelmeier in a talk entitled "Red Hat Linux 5.1 vs. CentOS 5.1: Ten Years of Change." Redelmeier took Red Hat Linux 5.1, released in June 1998, and compared it on the same computer to CentOS 5.1, a free version of Red Hat Enterprise Linux 5.1 released late last year. He chose the two systems because of the time separation, direct lineage, and coincidentally numbered versions of the two operating systems. He compared them by dual booting them on a 1999-built Compaq Deskpro EN SFX desktop machine with 320MB of RAM, upgraded from the original 64MB, and a 120GB hard drive, upgraded from the original 6.4GB drive that came with the machine.

Redelmeier described installing two versions of a Linux distribution nearly a decade apart in age on the same hardware as a "bit of a trick." For example, he said, Red Hat 5.1 only understood hard drive geometry as CHS -- cylinders, heads, sectors. How many people remember CHS? he asked. The standard bootloader at that time, LILO, had to be installed on a cylinder below 1024. On a 120GB drive, that meant ensuring that /boot showed up in the first 8.5GB of the drive. Except that Red Hat 5.1 had not yet introduced the concept of /boot as a separate partition -- that did not come until 5.2 -- and so the root partition needed to be in the first 8.5GB of the drive -- a relic of old AT BIOSes.
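
To see roughly where that 8.5GB figure comes from, here is a back-of-the-envelope sketch in Python. It assumes the common translated geometry of 255 heads and 63 sectors per track with 512-byte sectors; actual BIOS geometries varied.

    # Rough illustration of the old 1024-cylinder BIOS limit that LILO had to live with.
    # Assumes the common translated geometry of 255 heads and 63 sectors/track and
    # 512-byte sectors; real BIOS geometries varied.
    CYLINDERS = 1024          # highest cylinder the old INT 13h CHS interface could address
    HEADS = 255
    SECTORS_PER_TRACK = 63
    SECTOR_SIZE = 512         # bytes

    addressable = CYLINDERS * HEADS * SECTORS_PER_TRACK * SECTOR_SIZE
    print(f"CHS-addressable space: {addressable / 10**9:.1f} GB")  # roughly 8.4 GB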

Among his other surprises was that CentOS 5.1 and Red Hat 5.1 could not share a swap partition. Red Hat 5.1 could not read the CentOS swap partition without running mkswap on boot, which is not a normal boot procedure. Red Hat 5.1, he noted, was limited to a 127MB swap partition anyway. That version of the distribution could be installed in 16MB of RAM, so 127MB of swap seemed like an awful lot at the time. The computer Redelmeier chose did not have an optical drive, and so he was forced to install CentOS 5.1 using PXE boot. CentOS also requires a yum update once installed, which he described as very slow on that machine.

His observations from the process include noting that GRUB is generally better than LILO, as he had an opportunity to re-experience such entertainment as "LILILILILI..." as a LILO boot error.

Redelmeier indicated that he has been using Unix in some form or another since 1975. Considering that, he said, the Red Hat 5.1 Unix environment is "pretty solid." There were a "few stupidities," he said, "like colour 'ls'." Looking at it now, he noted, FVWM, the window manager in Red Hat 5.1, had an old feel to it. Another age-old piece of software, xterm, he said, was still mostly the same, except that in Red Hat 5.1, xterm had been improved slightly to use termcaps -- which broke it when you tried to use it remotely from, for example, Sun OS.

Red Hat 5.1 did not come with SSH; at the time it still had to be downloaded from ssh.fi. To log into the machine, he used rlogin with Kerberos. OpenSSH requires OpenSSL and a newer version of zlib than was available for Red Hat 5.1, something he was not inclined to backport. Redelmeier warned of "cascading backports" when trying to use newer software on such old installs.

Security, too, is quite bad in the original Red Hat 5.1, he commented, but the obscurity factor largely made up for it.

Another lesson he learned in the process of comparing the installs is about "bitrot." Redelmeier commented that the original pressed CDs that came in the box still worked fine, but his burned update CD had bonded to the CD case and was no longer usable. Avoid bitrot, he cautioned, by actively maintaining stuff you care about.

Issues in Linux mirroring

John Hawley, admin for the kernel.org mirrors, spoke in the afternoon about "problems us mirror admins have to scream about."

Not every mirror has 5.5 terabytes of space to offer the various distributions that need mirroring, Hawley said. Some mirrors have as little as one terabyte to offer. Yet in spite of this, many distributions leave hundreds of gigs of archival material on mirrors. Hawley asked that distributions make it optional for mirror admins whether or not they take these archives. Fedora and Mandriva alone, he noted, use up fully half of his mirror space, while Debian, at a paltry half terabyte, has cleaned up its act on request. He warned that if other distributions don't start reducing their mirror footprint, mirrors will no longer be able to carry them.

Disk cache is a major constraint on mirrors, Hawley warned. Disk I/O is the most significant part of any mirror operation. No mirror can keep up, he noted, with 2,000 users downloading distributions at the same time if the servers are not able to cache the data being sent out. Cache runs out, I/O use goes up, disk thrashing begins, the load goes up, and it is nearly impossible to get it back under control without restarting the HTTP daemon.

Keep working sets as small as possible, Hawley asked. His servers have 24GB of RAM, yet a distribution today can be 50GB. To distribute the whole distribution, some of it necessarily has to come from disk at any given time, since only half of it can fit in RAM. Add multiple releases at the same time, and pretty soon mirrors are no longer able to keep up.
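
As a rough illustration of that point (the 24GB and 50GB figures come from the talk; the even-spread assumption is mine), the fraction of requests that must fall through to disk grows quickly once the working set exceeds the page cache:

    # Toy model: if requests are spread evenly across the working set, anything
    # that does not fit in the page cache has to be read from disk.
    def disk_fraction(working_set_gb, cache_gb):
        if working_set_gb <= cache_gb:
            return 0.0
        return (working_set_gb - cache_gb) / working_set_gb

    CACHE_GB = 24                      # RAM on Hawley's servers
    for releases in (1, 2, 3):
        ws = 50 * releases             # roughly 50GB per distribution release
        print(releases, "release(s):", f"{disk_fraction(ws, CACHE_GB):.0%} of requests hit disk")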

Hawley asked that distributions coordinate not to release at the same time. "I don't care what Mark said -- it's bad!" Hawley exclaimed, in reference to Mark Shuttleworth's keynote. Last year, Hawley noted as an example, Fedora, openSUSE, and CentOS all released within three days of each other, swamping mirrors. When that happens, he said, "we are dead in the water." Please, he said, when doing releases, coordinate with other distros so as not to release the same week.

Hawley strongly suggested that distributions need to learn to keep mirror operators in the loop on release plans. Sometimes, he said, the only way he knows that one of the distributions he is mirroring has released a new version is by the spike in traffic on his mirrors. When a distribution is preparing to release, he suggested sending repeated loud, clear emails to mirror admins to warn them of this fact.

And then Hawley really got started. He said he does not know of many admins who like BitTorrent.

Users think it's the best thing since sliced bread, and distributions and mirror admins are answering that demand. But Hawley would rather that users be informed as to what is wrong with BitTorrent.

So why is BitTorrent considered harmful?

The original idea, Hawley said, is to allow multiple users to download from the other people downloading.

This, he said, is great for projects with large datasets but small numbers of downloads. But once the volume rises, BitTorrent "falls flat on its face." Every client needs to talk to the tracker to get the source of its next segment and check the checksums of what it has. The tracker itself becomes a single point of failure, and a bottleneck to the download. There's no concept in BitTorrent of mirrors versus downloaders, as everyone takes on both roles. This also means that any user of BitTorrent sinks to the lowest common denominator. If, for example, in your cloud of downloaders, there is a 56K modem user, that user can slow down the rest of the users' downloads considerably as they wait to get chunks out of that modem.

BitTorrent, Hawley said, is complex for everyone. It adds manual labour to set it up to work on the mirrors, it is slow to download, and he explained that BitTorrent as a whole cannot even keep up with a single major mirror.

With graphs to back him up, Hawley showed that in the first week of Fedora 8's release, the total number of BitTorrent downloads of the release across all sources was roughly equivalent to the number of direct downloads from the kernel.org mirrors alone, yet some 25% of all bits traded over BitTorrent for Fedora 8 still came from the kernel.org mirrors.

Among its problems, BitTorrent is a largely manual process to set up for mirror admins. BitTorrent does not inherently have a way to automatically detect and join existing torrents, nor does it have an easy way to create a torrent from existing data. Aside from that, its chunk approach to data distribution causes disk thrashing on the servers. Per download, he said, BitTorrent is 400 times more intensive than a single direct download from a mirror, largely on the client side, because of its weird disk seeks.

With a Web server, the server can simply use a kernel function called sendfile() to pick up a file and send it. With BitTorrent, a file is divided into small chunks that must constantly be sought out and distributed.
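
For contrast, here is a minimal sketch of the two access patterns. It uses Python's os.sendfile(), which wraps the same kernel call; the chunk size, piece size, and function names are illustrative, not from the talk.

    import os
    import socket

    CHUNK = 256 * 1024  # illustrative chunk size for sendfile()

    def serve_with_sendfile(conn: socket.socket, path: str) -> None:
        # One sendfile() call per chunk; the kernel copies file data straight to the
        # socket, reading the file sequentially from the page cache or disk.
        with open(path, "rb") as f:
            size = os.fstat(f.fileno()).st_size
            offset = 0
            while offset < size:
                sent = os.sendfile(conn.fileno(), f.fileno(), offset, CHUNK)
                if sent == 0:
                    break
                offset += sent

    def serve_torrent_style(conn: socket.socket, path: str,
                            piece_order: list, piece_size: int) -> None:
        # BitTorrent-style access: peers request pieces out of order, so the server
        # seeks all over the file instead of streaming it sequentially.
        with open(path, "rb") as f:
            for index in piece_order:
                f.seek(index * piece_size)
                conn.sendall(f.read(piece_size))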

If BitTorrent continues to thrash mirrors, he warned, mirrors will no longer participate.

Peer-to-peer distribution for Linux distribution releases has a role, he said, but BitTorrent is not the answer.


This marks the tenth consecutive year of the Ottawa Linux Symposium. Organisers say that 600 people attended this year, in spite of the weak US economy and the scheduling conflict with OSCON, which scheduled itself for the same week as OLS's traditional time slot -- and has again for next year. Some attendees at OLS, including keynote speaker Mark Shuttleworth, attended part of each conference to reconcile this conflict.

The Ottawa Congress Centre, where OLS has taken place for the past 10 years, is being torn down and rebuilt over the next three years. As a result, OLS is "going on the road" and will take place in Montreal at the Centre Mont-Royal next year, with dates to be determined.

As per tradition, Craig Ross, one of OLS's two key organisers along with Andrew Hutton, gave the closing announcements and statistics at the end of the last day. In 10 years, there have been approximately 5,000 attendees, 850 talks, 23 calls from embassies, 11 calls from authorities, 2 attendees found asleep in the fountain at the closing reception (alcohol is provided, in case you were wondering), and some 50,000 beverages consumed.

And of course, Ross had to post a slide showing T-shirt sizes issued through the conference -- slide photographed by Yani Ioannou -- showing the, ahem, enlarging Linux community.

Originally posted on Linux.com 2008-07-28; reposted here 2019-11-22.

conferences foss 2943 words - permanent link - comments: 0. Posted at 10:35 on July 28, 2008

OLS: Kernel documentation, and submitting kernel patches

The second of four days at the 10th annual Ottawa Linux Symposium got off to an unusual start as a small bird "assisted" Rob Landley in giving the first talk I attended, called "Where Linux kernel documentation hides." The tweeting bird was polite, only flying over the audience a couple of times and mostly paying attention.

Landley did a six-month fellowship with the Linux Foundation last year to try to improve the Linux kernel's documentation. He explained that it was meant to be a year, but after six months he had come to some conclusions about how documentation should be done, which he said the Linux Foundation both agreed with and did not plan to pursue, and so he went back to maintaining his other projects.

Where, asked Landley, is kernel documentation? It's in the kernel tarball, on the Web, in magazines, in recordings from conferences like OLS, in man pages, on list archives, on developers' blogs, and "that's just the tip of the iceberg." The major problem is not a lack of documentation, he said, but that what is out there is not indexed.

The challenge in providing useful documentation for the Linux kernel, Landley said, is therefore to index what is already out there. When a source of some documentation for some item gains enough traction, it becomes the de facto source of documentation for that particular subsection of the kernel, and from then on gets found and maintained. But there is a big integration problem, as such sources of documentation are scattered around.

It is hard enough for Linux kernel developers to keep up with the Linux Kernel Mailing List, Landley noted, let alone to read all the other lists out there and keep track of the ever-growing supply of documentation. Putting all the kernel documentation found around the Internet together is itself a full-time job. Jonathan Corbet of LWN, he noted, is good at this already, but there are several people doing it, each in their own way in their own space.

The Linux kernel developers' blog aggregator, kernelplanet.org, and other aggregators offer a huge amount of information as well, Landley noted. But he said we need to aggregate the aggregators. Google is inadequate for the challenge, he said, as it can take half an hour to find some pieces of information, if you can find them at all, and it only indexes Web pages, not, for example, the Documentation directory in the kernel source tarball.

So what are the solutions? Landley explained how he set up a new page on kernel.org called kernel.org/doc, where all the aggregated documentation is stored in a Mercurial archive and is automatically turned into an indexed Web page. Adding information to this database is a task that requires a lot of editing, Landley said, quoting Alan Cox: "A maintainer's job is to say no." As the maintainer of the kernel documentation on kernel.org, Landley sees himself as mainly responsible for rejecting submissions, as one would with kernel patches. As a tree in its own right, the documentation has to be kept up to date and managed.
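
Landley did not go into the mechanics in detail, but the general shape of the setup -- a checked-out tree of documents automatically turned into an indexed Web page -- can be sketched in a few lines of Python. The directory layout and file names here are hypothetical.

    import html
    from pathlib import Path

    DOC_ROOT = Path("kernel-doc")   # hypothetical checkout of the documentation tree
    OUTPUT = Path("index.html")

    def build_index(root: Path, out: Path) -> None:
        # Walk the documentation tree and emit one link per file.
        lines = ["<html><body><h1>Kernel documentation index</h1><ul>"]
        for path in sorted(root.rglob("*")):
            if path.is_file():
                rel = path.relative_to(root)
                lines.append(f'<li><a href="{rel}">{html.escape(str(rel))}</a></li>')
        lines.append("</ul></body></html>")
        out.write_text("\n".join(lines))

    if __name__ == "__main__":
        build_index(DOC_ROOT, OUTPUT)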

Asked why he does not use Wikipedia rather than the kernel.org/doc system, Landley explained that on Wikipedia, you cannot say no, so there's no real quality control on the information available, and it lacks a rational indexing system, which is still the core problem.

Landley said his six-month term with the Linux Foundation ended 10 months ago. While he is still responsible for the section, he no longer has the time to maintain it himself and stated that what is really needed is a group of a dozen or so dedicated volunteers under a maintainer to handle kernel documentation as its own project.

On submitting kernel features

Another interesting but somewhat difficult-to-follow talk on this day was given by Andi Kleen, who presented a brief course entitled "On submitting kernel features."

Kleen, a self-described "recovering maintainer," asked, "Why submit patches?" then said people submit kernel patches for a variety of reasons. The code review involved in submitting patches usually improves code quality. Including code in the kernel allows it to be tested by users for free. Having code in the kernel instead of maintaining it separately keeps it away from user interface conflicts. And you get free porting service to other architectures if your feature becomes widely used. Getting code into the kernel, he said, is the best way to distribute a change. Once it is in the mainline kernel, everyone uses it.

So how does one go about doing it?

Kleen outlined a few easy steps for submitting features to the kernel, and included two case studies to explain the points. The basic process as he explained it is:

You, the developer, write code and test it, and submit it for review. You fix it as needed based on the feedback from the review. It gets merged into the kernel development tree by the maintainer responsible for the section of the kernel that you are submitting a patch for. It gets tested there. And then it gets integrated into a kernel that is then released.

The basic things to remember when submitting code: the style should be correct and in accordance with the CodingStyle document in the kernel documentation directory found in the kernel source tarball. The submitted patch should work and be documented. You should be prepared for additional work on the code, with revisions and updates as needed. And expect criticism.

Kleen compared submitting kernel code to submitting a scientific paper to a journal for publication. Getting attention for your kernel patch means selling it well. There is generally a shortage of code reviewers, and the maintainers are often busy. In some cases, you could be submitting a patch to a section of the kernel that has no clear maintainer. So selling your patch well will get you the reviewers needed to get started on the process.

You have to sell the feature, Kleen said, and split out any problematic parts where possible. Don't wait too long to redesign parts that need it, and don't try to submit all the features right off the bat. As his case study, he discussed a system he wrote called dprobes. After a while of it not going anywhere, Kleen resubmitted the patch as kprobes with a much simpler design and fewer features, and the code became quickly adopted.

There are several types of code fixes one can submit, Kleen said. The clear bugfix is the easiest to do and sell. He advised against overdoing code cleanups, because bugfixes are more important. And for optimisations, he suggested asking yourself a number of questions: how much does it help? How does it affect the kernel workload? And how intrusive is it?

In essence, Kleen said, a patch submission is a publication. The description of the patch is important.

Include an introduction. If you have problems writing English, get help writing the introduction and description, he advised.

Over time and patches, the process becomes easier. When a kernel maintainer accepts a patch from you, it means he trusts you. The trust builds up over time. Kleen recommended making use of kernel mailing lists to do development on your patches, and suggested working on unrelated cleanups and bugfixes to help build trust.

Kleen's presentation is available on his Web site.

Originally posted on Linux.com 2008-07-25; reposted here 2019-11-22.

conferences foss 1236 words - permanent link - comments: 0. Posted at 10:39 on July 25, 2008

Ottawa Linux Symposium 10, Day 1

The tenth annual Ottawa Linux Symposium kicked off Wednesday in Canada's capital, just a few blocks from the country's parliament building, in a conference centre in the midst of being torn down. The symposium started with the traditional State of the Kernel address, this year by Matthew Wilcox. Among the dozens of talks and plenaries held the first day was kernel wireless maintainer John Linville's Tux on the Air: the State of Linux Wireless Networking.

The Kernel: 10 Years in Review

Matthew Wilcox gave the traditional opening address this year in place of Linux Weekly News's Jonathan Corbet, who has done it for several years and has been a staple of OLS's proceedings. Wilcox introduced himself as a kernel hacker since 1998, whose work history includes stints at Genedata, Linuxcare, and Intel, where he is today.

Getting down to business after a few minutes battling with both the overhead projector and the room's sound system, Wilcox gave a brief history of Linux kernel development. As most regular Linux users know, the kernel 2.6 tree has been around for quite a few years. This is a change from the old way, he explained, where stable releases came out as even numbers such as 2.0.x, 2.2.x, and 2.4.x, with development releases coming out as 2.1.x, 2.3.x, and 2.5.x. Minor kernel releases came out every week or so, with a new stable release approximately every three years. With the 2.6.x kernel, each version is itself a stable release, coming out approximately every three months, each with somewhat less dramatic changes than earlier major releases had.

With the history lesson over, Wilcox explained how kernel development itself is done. With each kernel version, there is a brief merge period, in which tens of thousands of patches are submitted through git, a large and scalable source management utility written specifically for the Linux kernel by Linus Torvalds. The purpose of using git, he explained, is primarily so that everyone is using the same tool. Mercurial, he said, is a comparable tool, but git is preferred for uniformity. CVS, he said, is not scalable enough for the task.

Why should the kernel be changed at all, Wilcox asked. New features are needed, new hardware is made, and new bugs are found. There is always need for change. He noted that 10 years ago, multiprocessor systems were expensive and poorly supported by Linux, but now it is difficult to get a computer without multiple processor cores. And now Linux runs on everything from 427 of the top 500 supercomputers in the world to a watch made by IBM. Over the last 10 years, Wilcox noted, from kernel 2.3 to 2.6.26, Linux has gone from supporting approximately six hardware architectures to some 25.

As an example of Linux's changes over the last decade, Wilcox said that in kernel 1.2, symmetric multiprocessing was not supported at all. In kernel 2.0, SMP support began, with spin locks being introduced in 2.2 to allow multiple processors to handle the same data structures. Kernel 2.4 introduced more and better spinlocks, and by kernel 2.6, Linux had the ability to have one processor write to a data structure without interrupting another's ability to use it.

After a discussion of details about improvements in the Linux kernel from wireless networking to SATA hard drive support to filesystem changes to security and virtualisation, Wilcox wrapped up his presentation when he ran out of time with a summary of improvements in recent kernel releases and what we can expect in the near future.

Since Corbet's talk a year ago, Wilcox said, kernel 2.6.23 introduced unlimited command length. 2.6.24 introduced virtual memory anti-fragmentation. 2.6.25 added TASK_KILLABLE to allow processes in uninterruptible sleep to be killed, although he said it is still imperfect and some help is needed with that. 2.6.26 added read-only bind mounts.

On the table for the upcoming Kernel Summit, Wilcox said, are asynchronous operations. The future holds Btrfs, better solid state device support, and SystemTap, among other things.

Wireless networking support in Linux

The next session I attended was called Tux On The Air: The State Of Linux Wireless Networking, by John Linville of Red Hat, who introduced himself as the Linux kernel maintainer for wireless LANs. Wireless, he admitted from the outset, is a weak spot in Linux, quoting Jeff Garzik: "They just want their hardware to work."

Linux wireless drivers typically used to have the wireless network stack built right into the drivers, meaning large amounts of duplication, and causing what Linville called "full MAC hardware" to appear to be normal Ethernet devices to the kernel. Full MAC hardware, he explained, is wireless hardware with on-board firmware. Many recent wireless devices have taken after Winmodems, he said, and provide only basic hardware to transmit and receive, with all the work done by the driver in the kernel. This he called "SoftMAC."

After a couple of other approaches were tried, SoftMAC wireless device drivers started using a common stack called mac80211. This proved popular with developers and eliminates a lot of duplicated code. Most new wireless drivers in Linux, he said, now use mac80211. mac80211 was merged into the Linux kernel tree in 2.6.22, with specific device drivers using it following in subsequent versions.

Linville showed a chart of some wireless devices and whether or not they used mac80211, and cited vendors for good or bad behaviour in cooperating with the Linux community to get drivers out for their hardware. Good corporate citizens, he said, include Intel and Atheros, while Broadcom, concerned with regulatory issues around opening up its hardware, refuses to help in any way. He repeatedly suggested that we vote with our dollars and support wireless device vendors who make an effort to support Linux.

The regulatory issues, Linville said, are, unfortunately, "not entirely unfounded." Regulations vary by jurisdiction, but the main concern is about allowing device operation outside of the rules to which they were designed. Vendors are expected to ensure compliance with regulations on pain of being shut down. Some vendors, he noted, proactively support Linux in spite of this.

Regulators, he said, are not worried so much about people using wireless devices slightly outside of their normal bounds, but rather about people using wireless devices to interfere with other systems, such as aviation systems. As long as vendors keep such interference difficult to do, they believe they are within compliance with the regulators, and so keeping drivers closed source and effecting security through obscurity helps them achieve that.

Wireless driver development represents a busy part of overall Linux kernel development, Linville said. He noted that his name as the sign-off on patches essentially represents wireless development in the kernel.

In kernel 2.6.24, 4.3% of merged patches were signed off by him, putting him in fifth place. In kernel 2.6.25, this was up to 5.0%, and by kernel 2.6.26, Linville signed off on 5.6% of all merged kernel patches, bringing him up to fourth place.

More information about wireless support in Linux can be found at linuxwireless.org.

That's some of the best of the first of four days at the 10th annual Ottawa Linux Symposium. More tomorrow.

Originally posted on Linux.com 2008-07-24; reposted here 2019-11-22.

conferences foss 1202 words - permanent link - comments: 0. Posted at 11:13 on July 24, 2008

Ontario LinuxFest makes an auspicious debut

The first-ever Ontario LinuxFest, unapologetically modeled on Ohio's conference of the same name, took place on Saturday at the Toronto Congress Centre near the end of runway 24R at Toronto's international airport. With only a few sessions and a lot of quality speakers, the organisers kept the signal-to-noise ratio at this conference as good as it gets.

The charismatic Marcel Gagné gave the first talk I attended. Gagné started his talk on what's coming in KDE 4.0, which is expected to be released in mid-December, by stating that KDE 4.0 is a radical departure from existing desktop environments, including current versions of KDE.

KDE 4's revamping is based on the premise that user interfaces are not natural or intuitive. We learn to work around the interface instead of designing interfaces that work around us, Gagné said. The best way to evolve desktops going forward is to make them more organic. They should work the way you want them to work.

He then demonstrated KDE 4.0, cautioning that it remains a work in progress. He spent a half hour showing us the various new features already prepared in KDE 4.0. If you like eye candy, KDE's new desktop will keep you happy.

Ts'o keynote

The mid-day keynote address on Linux's past, present, and future came from Theodore Ts'o, the first Linux developer in North America, who joined Linus in developing the operating system in 1991.

Noting the operating system's earliest history, Ts'o jokingly described Linux as a glorified terminal emulator. He asked his audience to name the dates when Debian passed its constitution, when Red Hat released version 3.0.3, when the Qt Public License was released, and when Richard Stallman requested that Linux be renamed Lignux. Amid several guesses, he gave the answers: 1998, 1996, 1998, and 1996 respectively. Much of this history, he said, is already 10 years in the past.

Ts'o gave a brief history of Linux: In July 1991, Linus wrote the very first version. In 1992, X was added and the first distro was created. In 1994, Linux 1.0 was released and for the first time included networking support. A year later 1.2 came out; it was the first kernel to have multiplatform support, with the addition of SPARC and Alpha. In 1996, multi-CPU (SMP) support was added. In 1997, Linux magazines began to show up, and the user base was estimated at around 3.5 million people. In 1998, Linux received its first Fortune article, gaining corporate attention. In 1999, Linux 2.2 came out, and the user base had risen to an estimated 7 to 10 million people. That year also saw the dot-com bubble and the rise of Linux stocks such as Red Hat and VA Linux, the latter of which gained a record 698% the day it began trading. Briefly, Ts'o said, VA had a larger market cap than IBM.

In 2000 the slump began, but lots of cool work was still being done on Linux. By 2001, he said, Linux was used by an estimated 20 million users. In January 2003 Linux 2.6 was released and a new release model was adopted. Linux began to be taken for granted by corporations. 80% of Sun Opterons were running Linux rather than Solaris. And, around then, SCO started its lawsuit.

Today, Ts'o said, we are into our second round of 2.6 kernel-based enterprise Linux distributions. There's more competition. Vista's unpopularity has resulted in Microsoft extending Windows XP's life by an extra six months. Its failure is an opening for Linux. Sun is starting to get open source too, open sourcing Java.

Sun, Ts'o said, is releasing Solaris under a GPL-incompatible license, and 95% of Sun's code is still developed by Sun. Sun is worried about quality assurance, he said, but so is the free software community. Sun, he said, used to have a policy that if your code commits broke the build three times, you were fired. It's hard, he commented, to move from that environment to open source.

Ts'o's employer, IBM, is by contrast a small part of the community, he said, but is happy with that. It takes the attitude that it doesn't have to do everything itself.

And SCO has declared bankruptcy, Ts'o said to widespread applause in the room. But that said, open source software faces legal issues, particularly with the US Digital Millennium Copyright Act (DMCA), which the US government has been attempting to export, most recently to Australia. Now there is a patent troll suing Novell and Red Hat. Trade secrets are a problem, with the DMCA's limits on reverse engineering, he said. Our defence as a community, he warned, is to get involved in the political process.

Next, Ts'o waded into the debate between the GNU GPLv2 and GPLv3.

GPL version 3, he said, is now three months old, and it is clear that GPL version 2 is not going away. The result of this is that there are now two separate licenses. Kernel developers, he said, are not fond of version 3. He summarised the debate in two words: "embedded applications." Linux is in data centres and pretty much owns the Web serving market, Ts'o said. Embedded systems and desktops are Linux's future markets.

Ts'o said he's on the kernel side of the debate. From the GPLv2 view, he explained, we want embedded developers to use and contribute back to Linux so we can all do the stone soup thing and all make it good.

From the GPLv3 view, he went on, the mission is to allow all end users to be able to use embedded appliances with their own changes in the systems. TiVo, for example, has checksums to make sure changes are not made and this is a violation of freedoms. The rebuttal to this, he continued, is that if we use v3, appliance vendors will simply go elsewhere. We love our contributions, he said, and the developers' priority is software, not hardware. Open architecture companies will tend to survive better, he said. Most TiVo users won't make the hacks, but that 0.01% that do enhance value for the rest.

This argument will not be settled, Ts'o asserted. It's a values argument, a religious one. GPLv3 adds restrictions, he said, and that means that GPLv2 sees GPLv3 as a proprietary license. The FSF, he said, argues that these restrictions are good for you.

GPLv3 code can have v2 code mixed in with permission, he said, but what happens if an LGPL v3 library is linked to a GPL v2 application? Is it legal? Now developers have to worry about GPL compatibility that undercuts instead of establishing new markets. It's kind of a shame, he concluded.

Competition within the existing Linux markets is becoming a problem, Ts'o said. In the early days, Bob Young, founder and one-time CEO of Red Hat, used to hand out CDs for competing distributions at events. He described this as "growing the pie," said Ts'o. The bigger the pie, the more Red Hat's piece of it was worth. The competition right now, though, is causing him some worry, he said. He called it the tragedy of the commons.

Some companies are doing the hard work while others are reaping the reward, he said. As some companies do the research and development and make the results open source, others are picking up the work and selling it in their own products. It's perfectly legitimate, Ts'o said, but there is a risk of companies ceasing to do the work if they are not the ones benefitting from it. Will there be enough investment in the mainline kernel to sustain it? Not enough people, he said, are doing code review as it is.

Who does the grunt work?

Open source is good at fixing bugs and making incremental improvements, Ts'o said. Massive rewrites are more difficult. As an example, he cited the block device layer of the Linux kernel. A need was identified in 1995 to rewrite this, but it was not done until 2003. Kernel 2.6 fixed this piece of the kernel, but required rewriting many parts of the kernel to accommodate the new system.

Most major open source projects, he said, have paid people at the core. Linux and GNOME are mostly developed by funded engineers, he said, although KDE is more of a hobbyist project than the others. At some conferences, such as the Ottawa Linux Symposium and LinuxWorld, it is easy to think that corporate dollars are all there is, while conferences like FOSDEM and Ohio LinuxFest are not the same way. The funding of some open source developers and not others can cause tension, he said, noting Debian's controversial Dunc-Tank project, which aimed to pay some staff full-time temporarily to facilitate a faster release. Many people inside Debian did not like this. While some Debian developers are paid by outside companies such as Canonical, money should not invade the inner sanctum of Debian, Ts'o said people felt. How do you work within this division? We need both the hobbyists and the corporates, he said.

Moving on, he warned the audience to be wary of Microsoft. Vista's failure does not mean Microsoft is sitting on its hands. A few years ago, he pointed out, Sun was seen as dead. Now look at it. Microsoft has a lot of money, he said, and will not always make stupid moves.

Software has to be easy for everyone to use. Microsoft spends millions on usability tests. He described a process where not overly technical people are put in front of new software and told to use it. This process is recorded and the developers watch the recordings to gain a better understanding of how actual users will use the applications. He described it as similar to watching a videotape of your own presentation.

Some software, he said, will always be proprietary. Tax filing software, with its involvement of attorneys, and massively multiplayer games such as World of Warcraft are examples of this. If we want to achieve world domination, he said, we must at least not be actively hostile to independent software vendors.

Windows, he said, has a huge installed base. We have equivalents to 80 to 90% of Windows-based applications, but there are a lot of niche apps. The Linux Standard Base, he said, is a way to help us achieve a Linux for which ISVs are able to produce this software.

Whither the Linux desktop?

The year of the Linux desktop, Ts'o noted, seems to be n+1. But, he said, we are getting better. We are starting to see commercial desktop applications for Linux, such as IBM's Lotus Notes. Laptops are now selling with Linux installed, and we now have a decent office suite.

So what are we missing, he asked. Do we have bling? He moved his mouse around and made his desktop turn around like a cube with Compiz. Do we have ease of use? It's getting better, he said. Raw Linux desktops are pretty much on par with raw Windows desktops for usability, although we have some ways to go to catch up to Mac OS X, Ts'o said.

Do we have a good software ecosystem? We're getting there, he said, but we still have that last 20% to go.

We have office compatibility now, too. The format is more important than the operating system, he said. If people are unwilling to try Linux, put OpenOffice.org on their Windows machines. With the advent of OOXML, Microsoft's proposed document standard, people will need to change formats anyway. We might as well change them to OpenOffice.org and the OpenDocument Format (ODF). If we win the format war, we can switch the desktops later. And getting people to try OpenOffice.org, he noted, does not require people to remove Microsoft Office.

It has been a great 16 years of Linux, Ts'o said. It's amazing what we've done. There is lots more to be done, he concluded, but nothing is insurmountable.

Local issues

Following Ts'o's keynote address and a lunch break, I attended another interesting session that covered a project from a rural area not far from where I live. Ontario's BGLUG, in the counties of Bruce and Grey around Owen Sound, has gotten together with United Way to distribute donated low­end Linux machines to underprivileged students through an anonymous system with Children's Aid. Brad Rodriguez of BGLUG and Francesca Dobbyn of the United Way gave an hour­long talk discussing the details of how their project got going and its early results.

The short form is that a local government office offered Dobbyn's United Way office a handful of retired but good computers. Some of these were used to replace even older machines in her office, but she contacted the local Linux Users Group about the idea of making the machines available to high school students in need.

Families who are on government assistance must declare the value of all goods they receive, including software, and this value is taken out of their assistance cheques, so it was important, Dobbyn said, to ensure that these machines cost as little as possible and had no sustained costs. The LUG installed Linux on a number of these machines and Dobbyn, through the local Children's Aid society, distributed these machines anonymously to local students in need. They now accept hardware donations and have been keeping this up as a permanent program.

They used Linux, specifically Ubuntu 6.06, Rodriguez said, because it is free as in beer and is not at risk of immediate compromise the moment it is connected to the Internet without maintained antivirus and other security software -- maintenance that is impossible when the people setting up the machines cannot contact the people using them.

Four members of the LUG have volunteered to be contacted at any time by these students for help with the machines, which are not powerful enough to run games but are strictly for helping students in need complete homework assignments in the age of typed-only essays and PowerPoint class presentations.

One of the issues schools have faced is that when the students take the work to school, they are putting their disks in Windows computers and using Microsoft Office to print out their assignments. While there is no technical issue, the school board will not allow non­Microsoft software to be installed. Like many school boards, they are underfunded, and Microsoft donates a lot of the computer equipment on the sole condition that non­Microsoft software be barred from these machines.

And the rest

Among the many other topics discussed at Ontario LinuxFest was a completely objective comparison of Microsoft's OOXML document standard and OpenOffice.org's ODF document standard by Gnumeric maintainer Jody Goldberg, who has had to wade through both in depth. His summary is that OOXML is not the spawn of Satan, and ODF is not the epitome of perfection. Both have their strengths and weaknesses, and he sees no reason why we could not go forward with both standards in use.

Ultimately, Ontario LinuxFest was one of the best Linux conferences I have attended. Because organisers kept it to one day with two session tracks, two BOFs, and the ever-present Linux Professional Institute exam room, there were few times when it was difficult to choose which session to attend. With only a handful of sessions on a wide variety of topics, the signal-to-noise ratio of good sessions to filler was high. I did not find any sessions that were a waste of time.

The only sour note of the conference was that it did not break even. Although between 300 and 350 people attended, organisers literally had to pass a bag around at the end asking for contributions to offset their budget shortfall. In spite of this, I believe Ontario LinuxFest is a conference that is here to stay, and I look forward to OLF 2008.

Originally posted to Linux.com 2007-10-15; reposted here 2019-11-22.

conferences foss 2646 words - permanent link - comments: 0. Posted at 11:17 on October 15, 2007

OLS closes on a keynote

The fourth and final day of the ninth annual Ottawa Linux Symposium wrapped up on Saturday with a few more sessions and a keynote address by Linux kernel SCSI maintainer James Bottomley.

During the day I attended a few sessions, among them one entitled "Cleaning Up the Linux Desktop Audio Mess" by PulseAudio lead developer Lennart Poettering.

Linux audio is a mess, Poettering asserted. There are too many incompatible sound APIs, with OSS, ALSA, EsounD, aRts, and others vying for the sound device. Each of these systems has limitations, Poettering said. There are also abstraction APIs, but they are not widely accepted, he said. Abstraction layers slow things down while removing functionality from the APIs they are abstracting.

What desktops lack, he said, is a "Compiz for sound." Different applications should be able to have different volumes. Music should stop for VoIP calls. It should be possible for the application in the foreground in X to be louder than the application in the background. Applications should remember which audio device to use. Hot switching of playback streams, for example between music and voice over IP, should be possible, and the sound streams should seamlessly be able to transition between speakers and a USB headset without interruption, he said.

In spite of these missing capabilities and the API mishmash, Poettering said that there are some things that Linux audio does do well -- among them, network transparent sound, the range of high-level sound applications, and a low-latency kernel at the core.

The audio mess in Linux, Poettering said, is not a law of nature. Apple's CoreAudio proves this, as does Windows Vista's userspace sound system. In the effort to improve Linux audio, he said, we need to acknowledge that although the drivers may be going away, the Open Sound System (OSS) API is not. It is the most cross-platform of the Linux sound APIs. It is important to remain compatible with the OSS API, he said, but it is necessary to standardise on a single API. Linux audio needs to stop abstracting layers, he said, and it needs to marry together all the existing APIs and retain existing features.

Poettering said PulseAudio, a modular GPL-licensed sound server that is a drop-in replacement for EsounD, offers a solution to the problem. It is a proxy for sound devices that receives audio data from applications and sends audio data to applications, he said. It can adjust sample rates and volume, provide filters, and redirect sound and reroute channels. PulseAudio comes with 34 modules and supports OSS, ALSA, Solaris sound, and Win32. It even supports the LIRC Linux remote control functionality.

PulseAudio is not a competitor for the professional audio package Jack, Poettering said; it can run side by side with it. What PulseAudio is not is a streaming solution, nor a demuxing or decoding system. It is not an effort to try to push another EsounD on people. It is a drop-in replacement for EsounD that will just work, superseding every aspect of EsounD and ALSA dmix in every way.

PulseAudio is included, but not enabled, in most distributions. Now that he works at Red Hat, Poettering said, maybe Fedora 8 will have it enabled by default.

James Bottomley's keynote address

This year's keynote address was delivered by kernel SCSI maintainer James Bottomley, a charismatic Brit known, among other things, for wearing a bow tie. His lively presentation was called "Evolution and Diversity: The Meaning of Freedom and Openness in Linux."

Borrowing a slide from Greg Kroah-Hartman's 2006 keynote, Bottomley showed a picture of David and the flying spaghetti monster with the caption: "Linux is evolution, not intelligent design." Evolution is a process for selection, he said, and diversity is the input to the evolution. Evolution selects the most perfect options from the diversity tree. In nature, evolution results in only one or two or three perfect species from the diversity input. In Linux, Bottomley said, evolution is an adversarial process with the occasional bloodbath of the Linux Kernel Mailing List, patch review, testing, and taste.

Bottomley said oddball architectures like Voyager and PA-RISC create diversity to feed the evolutionary process of Linux. Getting architectures like these included in the kernel requires innovation. There are other constituencies with small communities that make a big difference -- accessibility, for example. It is not popularity and number of users that determine what gets into the kernel; anything that is done well can go in.

Evolution and diversity are battling forces, Bottomley said. Innovation is created by their give and take.

Freedom also appears as a result of this give and take. As long as the ecosystem works, you have freedom to think, innovate, and dream. Linux supports any device, large or small.

Openness is not like freedom, Bottomley said. Openness is a fundamental input to the evolutionary process, while freedom appears as a result of the process. Unless you show the code, no one can review it, and it cannot be debugged and stabilised.

If you are not testing Linux kernel -rc1 releases, you have less right to complain about the kernel, he said. Don't concentrate on distribution kernels, concentrate on upstream kernels. If you test for bugs upstream, fewer bugs will flow to the distros. Wouldn't it be nice, he asked, to find the bugs before they get into distribution kernels?

Maintainers are arbiters of taste and coding style. They are the guarantors of the evolutionary process, and they have the job of applying the process to get people to come forward and innovate.

Diversity itself acts as an evolutionary pressure, he said. Bit rot, an old but heavily used term, is the equivalent of mold, sweeping up dead things. Bit rot ensures old code is dead and gone. Bit rot is why we will never have a stable API in Linux, Bottomley said.

A lot of people are afraid of forking, Bottomley said, but the ­mm tree is technically a fork of the Linux kernel. He posted a slide quoting Sun CEO Jonathan Schwartz: "They like to paint the battle as Sun vs. the community, and it's not. Companies compete, communities simply fracture." The quote was part of an argument on the Linux Kernel Mailing List (LKML) earlier this year.

What does it mean that companies compete, asked Bottomley, posting a slide of the names HP-UX, AIX, SYSV, SunOS, and MP-RAS overlaid on a nuclear explosion. The battle of the Unixes is what it means, he said, and it left a lot of corpses and wounded a lot of customers. He warned that Schwartz is trying to portray Linux as an inevitable return to the Unix wars.

The forking and fragmentation that we do in Linux, he said, is necessary for our ecosystem. Forking provides the energy for our evolutionary process. It is a hard idea to get your mind around, a paradigm shift, he said. No project is open source unless it is prone to forking, he stated. Go look at Solaris code, he challenged, and see how you can fork it. No one owns Linux, but all of the thousands of people who have contributed own a piece of the Linux kernel. The freedom to think, to experiment, and to fork is what drives the community. Sun, he said, is engaged in a FUD campaign to link Linux to fragmentation.

Rather than fighting this message, we must embrace it, he said. Openness and innovation force forks to merge, and the combination of the forks is usually better than either one alone. Nature creates lots of forks, he noted. Evolution is wasteful, but Linux does this in a useful way.

We must increase the pace of the "innovation stream," Bottomley said. The process is getting faster. The increasing number of lines modified per release is not a problem, it is in fact a good thing. Increasing the pace of change must increase the evolutionary pressure.

One of the problems facing Linux, he noted, is that it is written in English, which keeps the non-English-speaking majority of the world from contributing to the Linux kernel. We need to come up with a way of accepting patches in foreign languages, he said.

No talk about Linux is complete, Bottomley said, without a discussion of closed source drivers. When you produce a closed source driver, you cut yourself off from the community and from the evolutionary process. Bit rot works against you, and you are in a constant race to keep up with the kernel. Closed source drivers waste the talent of your engineers and waste money. They aren't immoral or illegal, just "bloody stupid," he said.

We often preach to the converted, he warned. Engineers at companies providing closed source drivers generally support open aims; it is the management and lawyers who have the problems. They see the code as intellectual property, and property needs defending. Flaming the engineers on the LKML is not the way to bring about change, he warned. You have to go after the executives and the legal arm, and if need be you can tell the companies to go to the Linux Foundation, which has an NDA program to produce fully open source drivers from NDA specifications. Saying he wasn't sure whether it had been announced yet, but that if not he was hereby announcing it, Bottomley said Adaptec will be the first company to use the NDA program to get open source drivers.

The purpose of the NDA program, he said, is to help companies whose specs themselves are fine to release, but whose document margins are full of comments from the engineers that could be construed as slanderous. Hewlett-Packard, for instance, once gave out some PA-RISC documentation with doodles containing blasts against the competition; lawyers had to redact the doodles before the documentation could be released. The NDA program can get around the issue of dirty little secrets like this, he explained. All we want, he said, is the driver.

To wrap up, he said, evolution and diversity put tension in the system until freedom is created in the middle. Forking is good. Whatever we are doing, we need to keep doing.

Wrap-up

Following the keynote, the OLS organisers provided entertaining announcements and gave away prizes.

Aside from the routine announcements about the functioning of the conference, organiser Craig Ross said that this, the ninth OLS, is the first at which the organisers had not heard from the main hotel where attendees were staying, the police, or the city of Ottawa about any of the attendees.

The ninth Ottawa Linux Symposium demonstrated Andrew J. Hutton and C. Craig Ross's professionalism at putting on the conference yet again. The well-oiled organising team kept everything flowing smoothly and more or less on time, with nearly all talks recorded on high definition video and WPA-protected wireless available throughout the conference centre. Just a couple of days before this year's Ottawa Linux Symposium, I attended the annual Debian Conference (DebConf) in Edinburgh, Scotland. I saw fewer than five people in common between the two conferences -- a testament to the diversity and number of people involved in the Linux community.

Originally posted on Linux.com 2007-07-02; reposted here 2019-11-23.

conferences foss 1851 words - permanent link - comments: 0. Posted at 14:21 on July 02, 2007

Thin clients and OLPC at OLS day three

The third day of the Ottawa Linux Symposium (OLS) featured Jon 'maddog' Hall talking about his dreams for the spread of the Linux Terminal Server Project (LTSP) throughout the third world as an inexpensive, environmentally friendly way of helping get another billion people on the Internet, along with an update on the One Laptop Per Child (OLPC) project, and several other talks.

Hall spoke in the afternoon at a very well attended session entitled "Thin clients/phat results - are we there yet?" Hall started with his trademark trademark disclaimer, in which he advised that his lawyers tell him he must remind his audience that Linux is a trademark of Linus Torvalds, among other trademark warnings.

Hall says he has been in computers since 1969, from the era when computers were huge, expensive mainframes. Programmers picked up the habit of doing their work in the middle of the night in that era, he commented at least somewhat jokingly, because the middle of the night was the only time students could get access to the mainframes -- the professors were not using them at that hour.

Eventually, minicomputers and timesharing were born. Computers had operators, and terminal users could rely on them to handle regular backups and restorations when users made mistakes. Sometimes a timesharing computer had too many users on it, and you could sense this when the five keypresses you had typed came back to you all together after a few seconds, Hall commented.

Finally the personal computer came along, he says. These computers, unlike their predecessors, spent most of their time idling and burning up a lot of electricity and making a lot of noise. He says that in France, there are noise level enforcers who monitor workspace noise with gauges to enforce noise limits.

PCs take up a lot of space, he continued. They dominate desks leaving little room for any other work. They are very inefficient, he says, each requiring its own memory and disk space. And they become obsolete quickly. But this talk is about thin clients, he says, not PCs.

Not wanting to get to his point too quickly, he paused to take a few more swipes at PCs, noting that desktop security is very poor and citing an example in which a government employee was found bringing classified data home on her USB key to work on next to her drug-dealing boyfriend. Cleaning crews, too, Hall says, can be a threat to desktop security.

"What is the real problem?" Hall asked, showing a photo of his parents. His father, he says, was an airplane mechanic who took apart the family car engine and put it back together multiple times without any missing or leftover parts, without instructions. Mom & pop, he says, don't want to spend time compiling kernels, they expect their computers to just work.

There is a complex electrical appliance that mom & pop can use that just works, Hall says. It is called the telephone. The telephone network is not trivial, requiring highly skilled people to maintain it, but it is all hidden from the end user.

Thin clients running free software from the Linux Terminal Server Project (LTSP), Hall says, are smaller, thinner, cheaper, lower powered, and easier to use than their PC counterparts. LTSP servers need substantial power, but they give users and administrators a single point of work and maintenance. LTSP is a hit around the world, Hall says. Old systems such as 486s can be reused as terminals. Hall cited the example of a school that used a number of donated obsolete computers and a donated server running LTSP, on no budget, to provide the school with a usable computer system.

Terminals generally have no local storage, Hall says, can be easily turned off at night, and by their nature don't lend themselves to software piracy. He used the opportunity to go on a tangent about how he agrees software piracy is bad. Software authors have every right to say how they would like to see their software used, he says. Users are free to use or not use that software.

On the topic of piracy, he discussed how Brazil distributed a large number of very cheap Linux computers. Before long, a major software corporation complained to the president of Brazil that people buying the computers were, at a rate of around 75%, replacing Linux with pirated copies of another operating system. When asked by the government what to make of this, Hall replied that this is progress, as it is down from the country's national piracy rate of 84%.

Hall says that the piracy rate in the US is estimated at 34%, while it is 96% in Vietnam. People in Vietnam, he pointed out, make $2 or $3 per day and should not be expected to pay the $300 for a shiny CD which they know only costs $2 to get down the street.

The third world needs to be on the Internet, Hall says, but it can't afford to do it with proprietary software. Using LTSP, it could.

Hall says that global warming has become a major issue. Imagine, he says, if one billion more people brought desktops online, each using 400 to 500 watts of electricity. Much of the power used by a computer is turned into heat, he says, further adding to the cost in poorer areas by increasing the need for air conditioning to deal with the extra heat.

Still on the issue of the environment, he says that in his home town in New Hampshire he used to drive down to the local dump, and later he drove up to the local dump, until eventually it was closed because it was so full that water run-off was starting to affect the town's drinking water supply. Thin clients are very small, he says, able to fit in the back of a monitor. They have no moving parts, make no noise, and provide a good lifespan -- so they don't end up in landfills quite so quickly.

LTSP should be used to create a new open business model, he says: allow people to become entrepreneurs with LTSP and train them to provide LTSP services. He reminisced about the early days of ISPs, before the big companies had the Internet figured out, when local ISPs were small local businesses whose clients could actually talk to the operators in a meaningful way -- until they were bought up by larger companies and the service deteriorated to a level similar to that of large software companies. LTSP-based net cafes could exist under this model, he suggested.

In South America, Hall says, 80% of people live in urban environments. Basic services are very expensive when they're available at all. An income of $1,000 a month is considered very good in most cases. One hundred clients would provide this level of income to an LTSP entrepreneur, Hall says, and could provide phone or Internet radio services.

Hall says his goal is to have 150 million thin clients in Brazil, requiring between one and two million servers and creating about two million new high tech jobs in the country. It would create a local support infrastructure and could realistically be done with private money, he says. It would create useful high tech jobs and on site support by entrepreneurs.

Linux on Mobile Phones

Another session on the third day was the Linux Mobile Phone Birds of a Feather (BoF), led by Scott E. Preece of Motorola Mobile Devices and the CE Linux Forum. Preece started by introducing himself and his subject, noting that around 204 million Linux handhelds are expected to be sold in the year 2012. Motorola expects much of its handheld lineup to run Linux, he says.

Linux is a good platform for experimentation, and by its nature provides good access to talent. Lots of people want to learn Linux, while not as many are interested in learning to develop for Symbian or Windows. Linux, he says, is a solid technology. It can be configured for small systems while retaining large system capabilities. It can be modified to suit needs as appropriate.

Companies using Linux for handhelds have been banding together lately to form a number of collaborative initiatives. There are four major ones, he says, listing the Linux Foundation, the Consumer Electronics Linux Forum, the Linux Phone Standards (LiPS) Forum, and the LiMo (Linux in Mobile) Foundation, as well as two open source projects working on the topic.

One of these is GNOME, which has a community effort to address mobile phones, and the other is OpenMoko, which aims to have a completely free mobile phone stack except for the GSM stack, which is hard-coded into a chip. The OpenMoko project, Preece says, is a community-style, code-centric project under a corporate structure.

Preece described the various foundations at some length before tension began to build in the room over his employer's apparent GPL violations. Motorola, said several members of the audience, has been releasing Linux-based handheld devices already but, in violation of the GPL, has not been providing access to the source code for these devices.

David Schlesinger of Access Linux says that his company would be releasing all the code for its phones no later than the release of the phones themselves, to much enthusiasm.

Preece promised to once again push his company to take the GPL complaints seriously and try to address them.

One Laptop Per Child

The last session I attended on the third day was a BOF about the One Laptop Per Child (OLPC) project presented by OLPC volunteer Andrew Clunis.

"It's an education project, not a laptop project", Clunis began, quoting OLPC founder Nicholas Negroponte. A high quality education is the key to growing a healthy society, he continued, and an inexpensive laptop computer for every child in the world is a good way of doing it.

Children learn by doing. Until they are about five, children learn only what they are interested in learning; after that they go to kindergarten and enter the instruction/homework cycle of modern education, Clunis says. The OLPC project helps kids learn by doing and by interacting.

Collaboration is paramount. If our network connections go down, he says, our laptops become warm bricks. OLPC laptops take this to heart by including networkability of applications directly in the human interface specification.

Andrew Clunis

Everything, Clunis says, from the firmware stack to the applications, is free software. It needs to be malleable, as he put it. OLPC is not interested in the consumer laptops of the west.

The laptops themselves depend on as little hardware as possible. They use 802.11s ESS mesh networking for their connectivity; mesh networking allows each laptop to relay data for other laptops so that they can reach the access point even when they are out of its range directly. The access point itself, Clunis says, will likely be connected to the Internet by satellite in most cases.

The recharge mechanism for the laptop can use pretty much anything that generates power, although the prototype's crank has been replaced with a pull cord to use the more powerful upper arm muscles instead of the relatively weak wrist muscles, Clunis says. Because of this means of keeping the battery charged, power management is very important. He described the power usage of the laptops as an "order of magnitude" lower than the 20 or 30 watts typically used by modern laptops.

The laptop is powered by a 466MHz AMD Geode LX-700 processor with 256MB of RAM, a 1GB flash drive using jffs2 with compression to bring its effective capacity to around 2GB, a specialized LCD, a "CaFE" ASIC for NAND flash access, a camera, and an SD card slot. The laptop itself sports several USB ports and jacks and has no mechanically moving parts, although it has wireless antennas that flip up and a monitor that swivels on its base.

An OLPC laptop was circulated through the audience at the session, often getting held up for extended periods by individuals fascinated by the tiny, dinner-plate-sized machine.

A member of the audience indicated that the first shipment of OLPC laptops is due to go out this fall; 1.2 million of the laptops are expected to go to Libya at that time.

Clunis explained that the laptop frames are color coded by order to help track theft and black-market resale. The laptops themselves are designed to be relatively theft resistant: they are not useful outside the range of their parent access point, and some form of key is required to use the machine.

There was a good deal of cynicism in the room about the value of these laptops in the many parts of the world where the children who would receive them are far too hungry to appreciate the education the machines offer beyond whatever food their parents might be able to buy by selling them.

Similarly, some people present felt that in the poorest countries, where high crime rates make owning anything difficult, the laptops' owners would have a hard time holding on to them without their being stolen. Clunis did not have a clear answer to these concerns.

The third day of OLS wrapped up with my laptop taking a tumble down the stairs between the second and third floors of the venue, though the eight-year-old Dell doesn't seem to be any worse for wear, save for a broken clip. The next day of the conference is the last, with kernel SCSI maintainer James Bottomley's keynote to wrap things up.

Originally posted on Linux.com 2007-06-30; reposted here 2019-11-23.

conferences foss 2306 words - permanent link - comments: 0. Posted at 14:21 on June 30, 2007

Kernel and filesystem talks at OLS day two

Greg Kroah­Hartman kicked off the second day of the 9th annual Ottawa Linux Symposium with a talk entitled "Linux Kernel Development ­ How, What, How fast, and Who?" to a solidly packed main room with an audience of more than 400 people.

Kroah­Hartman set up a large poster along the back wall of the session room with a relational chart showing the links between developers and patch reviewers for the current development kernel, along with an invitation for all those present whose names are on the chart to sign it.

He launched into his talk with a bubble chart of the development hierarchy as it is meant to work, showing a layer of kernel developers at the bottom who submit their patches to about 600 driver and file maintainers, who in turn submit the patches to subsystem maintainers, who in turn submit them to Andrew Morton or sometimes Linus Torvalds directly.

Andrew Morton was originally selected to maintain the stable tree while Linus worked on the development tree, Kroah-Hartman says. In practice, Morton merges all the unstable patches into a tree called the -mm tree -- named for memory management, his historical subsystem -- while Linus actually maintains the stable kernel tree. While they will never admit it, says Kroah-Hartman, they now work effectively the opposite of how they intended.

Patches are submitted up the previously mentioned hierarchy, and each person who reviews a patch and sends it up adds their name to the patch's Signed-off-by field. The large chart Kroah-Hartman printed and posted is built from these Signed-off-by relations, and it shows that the actual relationships between the layers of developers do not operate quite the way the bubble chart he showed suggests.

A stable release of the kernel comes out approximately every 2.5 months, Kroah-Hartman says. Over the two and a half years since the 2.6.11 kernel, changes have gone into the kernel at a sustained average rate of 2.89 patches per hour. Kernel 2.6.19 alone sustained a rate of four changes per hour, Kroah-Hartman showed on a graph.

In kernel 2.6.21 there are 8.2 million lines of code. 2,000 lines are added per day, with 2,800 lines modified each day, every day, on average, Kroah­Hartman says. He noted that the actual number is a matter of debate, but believes his numbers to be a relatively accurate reflection of reality.

Asked how these numbers compare to Windows Vista, Kroah-Hartman said that the Vista kernel is smaller but contains no drivers, so an accurate comparison is not possible. Kroah-Hartman was also asked whether each dot release is the equivalent of the 2.4 to 2.6 kernel change. No, he responded, but it takes only around six months to accumulate an equivalent amount of change.

Returning to his presentation after a flurry of questions, Kroah-Hartman compared the various parts of the kernel and noted that the 'arch', or architecture, tree is the largest part of the kernel by number of files, but that the drivers section makes up 52% of the overall size, with the comparison set being core/drivers/arch/net/fs/misc. All six sections seem to be changing at roughly the same rate, he noted. Linux is more scalable than any other operating system ever, Kroah-Hartman boasted; it can run on everything from a USB stick to 75% of the world's top supercomputers.

Kroah­Hartman went on to explore the number of developers who contribute to each release of the kernel and found that there were 475 developers contributing to the 2.6.11 kernel, over 800 in the 2.6.21 kernel and 920 in the not yet released 2.6.22 kernel so far. He noted the number could be off a bit as kernel developers seem to have a habit of misspelling their own names, although he says he did his best to correct for that.

The kernel development community is growing rapidly, Kroah-Hartman says. In the initial 2.6 kernel tree there were only 700 developers. In the last two and a half years, from 2.6.11 to 2.6.22-rc5, around 3,200 people have contributed patches to the kernel. Half of contributors have contributed one patch, a quarter have contributed two, an eighth have contributed three, and so forth, Kroah-Hartman noted as an interesting statistic.

By quantity, Kroah­Hartman says, and not addressing quality, the biggest contributors of patches to the Linux kernel are Al Viro in first place with 1,339 patches in the 2.5 year window, David S. Miller with 1,279, Adrian Bunk with 1,150, and Andrew Morton with 1,071. He listed the top 10 and indicated that a more extensive list is available in his white­paper on the topic. The top 30% of the kernel developers do around 30% of the work, he says, representing a large improvement from 2.5 years ago when the top 20% of kernel developers did approximately 80% of the work.

With statistics flying, Kroah­Hartman listed the top few developers by how many patches they had signed off on rather than written. First was Linus Torvalds at 19,890, followed by Andrew Morton at 18,622, David S. Miller at 6,097, Greg Kroah­Hartman himself at 4,046, Jeff Garzik at 3,383, and Saturday's OLS keynote speaker James Bottomley in 9th place at 2,048. Sometimes, Kroah­Hartman admitted, it feels like all he does is read patches.

Next, he talked about the companies funding kernel development. Measured by the number of patches contributed by people known to be employees of companies, and without accounting for people who changed companies during the data period, Red Hat came in second place with 11.8%, Novell third at 9.7%, the Linux Foundation fourth at 8.1%, IBM fifth at 7.9%, Intel sixth at 4.3%, SGI eighth at 2.2%, MIPS ninth at 1.5%, and HP in the number ten spot at 1.3%.

The keen-eyed observer will note that he initially omitted the first and seventh places in this list. Seventh place went to people known to be amateurs working on their own, including students, at 3.9%, and in first place were people whose affiliations were unknown and who had contributed fewer than ten patches each, making up 33.2% of total kernel development over the 2.5 year period.

Kroah­Hartman made the point that if your company is not showing up in the list of contributors and you are using Linux, you must either be happy with the way things are going with Linux or you must get involved in the process. If you don't want to do your own kernel contributions, Kroah­Hartman says, you could do what AMD is doing and subcontract his employer, Novell, or another distribution provider, or for less cost contract a private consultant to make needed contributions to the kernel for your company's needs.

Kroah-Hartman's talk, as his talks always are, was entertaining and lively in a way that is difficult to convey in a summary of its content.

The new Ext4 filesystem: current and future plans

The next talk was given by Avantika Mathur of IBM's Linux Technology Centre on the topic of the Ext4 filesystem and its current and future plans. Why Ext4, Mathur asked at the outset. The current standard Linux filesystem, Ext3, has a severe limitation in its 16 terabyte filesystem size limit. Between that and some performance issues, it was decided to branch into Ext4.

Mathur asked why not XFS or an entirely new filesystem? Largely, she explained, because of the large existing Ext3 community. They would be able to maintain backward compatibility and upgrade from Ext3 to Ext4 without the lengthy backup/restore process generally required to change filesystems. The XFS codebase, she says, is larger than Ext3's, and a smaller codebase would be better.

The Ext4 filesystem, available as Ext4-dev starting in Linux kernel 2.6.19, is an Ext3 filesystem clone that uses a 64-bit version of the JBD journaling block device layer. The goals of the Ext4 project, she explained, are to improve scalability, fsck (filesystem check) times, performance, and reliability, while retaining backward compatibility.

The filesystem now supports a maximum filesystem size of one exabyte. That is to say, with 48-bit block numbers and 4KB blocks, Ext4 can address 2^60 bytes, or 1,152,921,504,606,846,976 bytes. Mathur predicted this should last around five to ten years, after which time filesystems actually that large may appear and a move to 64-bit block numbers can be attempted -- something Ext4's design should allow without having to go to an Ext5.
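
The arithmetic behind the one exabyte figure is easy to check; a quick sketch:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 48-bit block numbers, each block 4KB (2^12 bytes) */
    uint64_t max_bytes = (uint64_t)1 << (48 + 12);
    printf("%llu bytes\n", (unsigned long long)max_bytes); /* 1,152,921,504,606,846,976 */
    return 0;
}
```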

Ext3 uses indirect block mapping, Mathur says, while Ext4 uses extents. Extents, she explained, use one address for a contiguous range of blocks. One extent can assign up to 20,000 blocks to the same file, and each inode body can hold four extents.
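
For the curious, the on-disk extent record in the kernel's Ext4 code looks roughly like the sketch below. The field names follow the kernel's, but the kernel declares them with explicit little-endian types, so treat this as an illustration rather than the canonical definition.

```c
/* Roughly the on-disk extent record used by Ext4 (the kernel uses explicit
 * little-endian types such as __le32; plain fixed-width types are used here
 * so the sketch compiles in userspace). */
#include <stdint.h>
#include <stdio.h>

struct ext4_extent_sketch {
    uint32_t ee_block;    /* first logical block this extent covers      */
    uint16_t ee_len;      /* number of contiguous blocks in the extent   */
    uint16_t ee_start_hi; /* high 16 bits of the starting physical block */
    uint32_t ee_start_lo; /* low 32 bits of the starting physical block  */
};                        /* 16 + 32 bits give the 48-bit block numbers  */

int main(void)
{
    /* Each record is 12 bytes; an inode body has room for a 12-byte
     * header plus four of these, hence "four extents per inode body". */
    printf("extent record size: %zu bytes\n", sizeof(struct ext4_extent_sketch));
    return 0;
}
```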

Mathur went on to describe features in or soon to be in the Ext4 patch tree. Ext4 will feature persistent preallocation, where space is guaranteed on the disk in advance and files are allocated space without the need to write out zeroed data. These preallocated extents, Mathur explained, are flagged as either initialized or uninitialized; uninitialized blocks, if read, are returned as zeros by the filesystem driver.
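
From an application's point of view, preallocation is requested through a preallocation call. Here is a hedged sketch using the portable posix_fallocate() interface, which, where the kernel and filesystem support preallocation, reserves the blocks without writing zeros (and otherwise the C library falls back to writing the file out):

```c
/* Sketch: asking the filesystem to persistently preallocate space. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("prealloc.dat", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Reserve 64 MB on disk up front; reads of the unwritten range
     * come back as zeros, as described in the talk. */
    int err = posix_fallocate(fd, 0, 64L * 1024 * 1024);
    if (err != 0)
        fprintf(stderr, "posix_fallocate failed: %d\n", err);

    close(fd);
    return 0;
}
```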

Also in Ext4, reported Mathur, is delayed allocation. Rather than allocating space at the time a file is written to buffers, it is allocated at the time the buffers are flushed to disk. As a result, files are able to be kept more contiguous on disk and short­lived or temporary files may never be allocated any physical disk space. Ext4 also sports a multiple block allocator which can allocate an entire extent at once. Also, Mathur says, an on­line Ext4 defragmenter is in the works.

The speed of filesystem checks is a concern with one exabyte filesystems, Mathur noted. Using the current fsck, it would take 119 years to check such a filesystem in its entirety. The version of fsck for Ext4 will not check unallocated inodes, Mathur says, among other improvements. She showed a performance chart indicating significant improvements in e4fsck over its predecessors.

Ext4 has a number of scalability improvements over Ext3, Mathur says, raising the maximum file size from two terabytes to 16 terabytes, with the file size limit left there as a performance versus size tradeoff. Ext4 has 256 byte inode entries, up from 128 bytes in Ext3, and introduces nanosecond timestamps in place of Ext3's one-second timestamps. Mathur explained that this is because, at the speed of operations now, files can be modified repeatedly within a single second. Also, she says, Ext4 introduces a 64-bit inode version number for the benefit of NFS, which makes use of it.
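
The sub-second timestamps are visible to ordinary programs through stat(); a small sketch (the file name is just an example):

```c
/* Sketch: reading a file's modification time at nanosecond resolution.
 * On filesystems that store sub-second timestamps (as Ext4 does), the
 * tv_nsec field carries the extra precision; on Ext3 it is simply zero. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("/etc/hostname", &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("mtime: %lld.%09ld seconds\n",
           (long long)st.st_mtim.tv_sec, st.st_mtim.tv_nsec);
    return 0;
}
```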

Mathur wrapped up her presentation with a thanks to the 19 people contributing to the Ext4 project.

The evening of this second day of OLS saw the second of two corporate receptions, with IBM putting on a spread and giving away prizes while talking about its Power6 processors in one of the conference rooms.

The Power6 processor, demonstrated briefly by Anton Blanchard, contains 16 4.7GHz cores. He demonstrated a benchmarking tool that compiles the Linux kernel 10 times for effect. It accomplished this in just under 20 seconds.

Of particular note, Blanchard reported that IBM uses Linux to test and debug the Power6 chips. Following this, the hosts of the reception gave away several PS/3s, some Freescale motherboards, and some other smaller bits of swag as door prizes. Thus ended the second day of four days of OLS 2007.

Originally posted on Linux.com 2007-07-29; reposted here 2019-11-23.

conferences foss 1912 words - permanent link - comments: 0. Posted at 20:13 on June 29, 2007

Day one at the Ottawa Linux Symposium

The opening day of the 9th annual Ottawa Linux Symposium (OLS) began with Jonathan Corbet, of Linux Weekly News and his now familiar annual Linux Kernel Report, and wrapped up with a reception put on by Intel where they displayed hardware prototypes for upcoming products.

The Kernel Report

Corbet's opening keynote began with a very brief history of Linux, showing the kernel release cycle since it was started in 1991. He made the point that the kernel has gone from a significant release every couple of years to one every couple of months over the last few years, with every point release of the kernel being a major release. Today, every point release has new features and API changes.

The release cycle today is very predictable, says Corbet, with kernel 2.6.22 anticipated in July and 2.6.23 expected around October. The cycle starts with a kernel, say 2.6.22­rc1, then a second release candidate is made available, and a third (if necessary), and so on until the release candidate becomes stable, and work begins on the next kernel.

Each kernel release cycle has a two-week merge period at the start, where new features and changes are introduced, culminating in the release of -rc1, or release candidate 1, followed by an 8-12 week period of stabilization of the kernel with its new features. At the end of this process the new stable version of the kernel is released.

This release cycle system, explained Corbet, was introduced with the 2.6.12 kernel, and the discipline within the kernel development community was established within a few releases. He demonstrated this with a graph showing the cumulative number of lines changed in the kernel over time and across kernel versions, which clearly showed a linear pattern of line changes evolving into a kind of staircase of kernel line change rates.

Since kernel 2.6.17 from June of 2006, Corbet reported, two million lines of kernel code have changed with the help of 2,100 developers in 30,100 change sets.

This release process, Corbet says, moves changes quickly out to the users. Where it used to take up to two or three years for new features to be introduced to the stable kernel, it can now take just a few months. This also allows Linux distributors to keep their distributions closer to the mainline kernel. Under the old kernel development model, Corbet continued, some distributions' kernels included as many as 2000 patches against the mainline kernel. With the rapid release cycle, distributions no longer have any significant need to diverge from the mainline kernel.

But it's not all perfect. Corbet noted that among the things that are not working well are bug tracking, regression testing, documentation, and fixing difficult bugs. Some bugs require the right hardware in the right conditions at the right phase of the moon to solve, he commented. As a result, kernels are released with known bugs still in place.

What's being done to address this? Better bug tracking, stabilization (debugging)-only kernel releases, and automatic testing are a few of the tactics Corbet listed. Things in the kernel are getting better overall, he says.

Corbet continued with his predictions for the future of the kernel, with the disclaimer that these are only his opinions of what is to come. First, Corbet says the soon-to-be-released 2.6.22 kernel will include a new mac80211 wireless stack, UBI flash-aware volume management, IVTV video tuner drivers, a new CFQ I/O scheduler, a new FireWire stack, eventfd() system calls, and a SLUB allocator.
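
Of those additions, eventfd() is the easiest to show in a few lines: it gives a program a file descriptor backed by a 64-bit counter that one part of the program can signal and another can read or poll. A minimal sketch, for illustration only:

```c
/* Minimal sketch of the eventfd() interface added around kernel 2.6.22. */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
    int efd = eventfd(0, 0);               /* counter starts at zero */
    if (efd < 0) { perror("eventfd"); return 1; }

    uint64_t val = 3;
    if (write(efd, &val, sizeof(val)) != sizeof(val))   /* signal: add 3 */
        perror("write");

    uint64_t out = 0;
    if (read(efd, &out, sizeof(out)) != sizeof(out))    /* consume: returns 3, resets */
        perror("read");
    printf("counter read back: %llu\n", (unsigned long long)out);

    close(efd);
    return 0;
}
```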

On the future of scalability, Corbet says that today's supercomputer is tomorrow's laptop. Linux's 512-processor support works well, he says, but 4096-processor support still needs some work. The other end of scalability, such as running on cell phones, Corbet noted, is less well represented in the kernel than its supercomputing counterpart.

New filesystems

Corbet says that filesystems are getting bigger, but they are not getting any faster. As drives and filesystems continue to expand, the total time needed to read an entire disk continues to go up. Most filesystems currently are reworks of 1980s Unix filesystems, he went on, and may need redoing.

Among the changes he predicted in the land of filesystems is a smarter fsck that only scans the parts of the drive that were in use. Corbet says that a new filesystem that came out just in the last few weeks, called btrfs, is extent-based, supports sub-volumes and snapshotting, has checksums, and allows on-line fsck.

Off-line fsck is very fast in it by design, says Corbet, noting that btrfs is far from stable.

The Ext4 filesystem, the successor to the current Linux Ext3 filesystem, should be coming out soon, says Corbet. It will feature the removal of the 16TB filesystem size limit through 48-bit block numbers, along with extents, nanosecond timestamps, pre-allocation, and checksummed journals.

The Reiser4 filesystem, the successor to ReiserFS, has stalled, says Corbet, with Hans Reiser no longer able to work on it. To move forward, Corbet says that Reiser4 needs a new champion.

The last filesystem Corbet mentioned is LogFS, a flash-oriented filesystem that keeps its directory trees on the media.

Virtualisation is becoming more than just a way to get money from venture capitalists, joked Corbet. Xen is getting commercial development and may finally end up in the mainline kernel tree in 2.6.23. He also mentioned Lguest and KVM, the latter a full virtualisation system with hardware support and working live migration that was merged into the kernel in 2.6.20, although it is still stabilizing.

Corbet also brought up containers, a lightweight virtualisation system in which all guests share one kernel. Several projects are working on this, he says, but he noted that they must all work together, as multiple container APIs would not work.

Next, Corbet predicted new changes to CPU scheduling, a problem once believed solved, with the introduction of the Completely Fair Scheduler (CFS), which, as Corbet put it, dumps complex heuristics and allocates CPU time in a very simple fairness algorithm. He says he expects this possibly in kernel 2.6.23.

Corbet also says he foresees progress with threadlets, or asynchronous system calls. If a system call blocks, the process will continue on a new thread and pick up the results from the blocked system call later. Among the other places he predicted change were power management, video drivers, and tracing, with the help of the soon-to-come utrace, an in-kernel tracer.

Last but certainly not least, Corbet says that the Linux kernel is not likely to move to version 3 of the GPL, owing to the fact that the entire kernel is explicitly licensed under version 2 of the GPL -- so even if the will to relicense it develops, doing so would be difficult.

Piled higher and deeper

The next talk I went to was by Josef Sipek, entitled "Kernel Support for Stackable File Systems." Sipek explained what stackable filesystems are and how they work in some detail, and discussed improvements on the way.

They're called stackable filesystems because several of these can be stacked together on top of one actual filesystem to give the filesystem additional functionality.

A stackable filesystem is a virtual filesystem layer that wraps around a filesystem. Sipek listed a number of examples with largely self-explanatory names. Among them are ecryptfs, a stacked filesystem layer that encrypts data on its way to the actual filesystem; gzipfs, a filesystem that compresses data; unionfs, a filesystem that combines multiple filesystems; replayfs, a stacked filesystem that can replay calls to the actual filesystem to assist with debugging; and a number of others.
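
The idea is easier to see in miniature than to describe. The toy below is userspace code, not kernel code, and every name in it is invented for the sketch; it simply shows a "stacked" read operation calling down to the layer beneath it and transforming the data on the way back up, the way ecryptfs decrypts or gzipfs decompresses.

```c
/* Userspace toy illustrating stacking: a wrapper layer forwards the read
 * to the layer below and transforms the result on the way back up. */
#include <stdio.h>
#include <stddef.h>

struct layer {
    FILE *lower;                                   /* the "real" filesystem */
    size_t (*read)(struct layer *, char *, size_t);
};

/* Lower layer: plain read from the underlying file. */
static size_t lower_read(struct layer *l, char *buf, size_t len)
{
    return fread(buf, 1, len, l->lower);
}

/* Stacked layer: call down, then transform the data on the way back up
 * (a trivial XOR stands in for decryption or decompression). */
static size_t stacked_read(struct layer *l, char *buf, size_t len)
{
    size_t n = lower_read(l, buf, len);
    for (size_t i = 0; i < n; i++)
        buf[i] ^= 0x42;
    return n;
}

int main(void)
{
    struct layer l = { .lower = fopen("/etc/hostname", "r"),
                       .read  = stacked_read };
    char buf[64];
    if (!l.lower)
        return 1;
    size_t n = l.read(&l, buf, sizeof(buf));
    printf("read %zu transformed bytes\n", n);
    fclose(l.lower);
    return 0;
}
```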

Intel's reception

At the end of the day, Intel held its annual reception in an upstairs room with free food and alcohol. The company used to provide speakers during this event as well, but as they did last year, this year an Intel employee rose to announce that there would be no speeches, just a technology demonstration in the corner of some of the company's latest hardware. He thanked those at OLS for their continued important work, and the notion of free alcohol with no speakers went over well, to enthusiastic applause.

At their technology desk I found an interesting looking device designated the ZI9, a soon-to-be-announced prototype of a mobile Internet device measuring a bit over 4 inches across by a bit over 6 inches long. By the time I got to it the battery was dead, but it looked like a cross between a Blackberry and a tablet laptop.

The device had a miniature keyboard and a large monitor on a swivel to cover the keyboard or operate as a useful monitor, as well as a small camera on a skewer across the top of the device. This device is apparently designed to run Linux.

OLS continues through Saturday, June 30th. We'll have additional reports throughout the week.

Originally posted to Linux.com 2007-06-28; reposted here 2019-11-23.

conferences foss 1488 words - permanent link - comments: 0. Posted at 20:18 on June 28, 2007

DebConf 7 positions Debian for the future

At last week's DebConf 7 Debian Conference in Edinburgh, Scotland, nearly 400 attendees had a chance to meet and socialise after years of working together online. They attended more than 100 talks and events, ranging from an update by the current and former Debian Project Leaders to a group trip to the Isle of Bute, off the opposite coast of the country.

Throughout the conference, socialising Debian developers could be heard discussing the finer points of the programs they maintain and use and ways things could be improved. Sometimes a laptop would open and something would get fixed on the spot.

The conference itself was held at the Teviot Row House in Edinburgh, just a few minutes south of the downtown core of the historic city. The venue had several presentation and discussion rooms, and all rooms were equipped with video cameras for recording the proceedings.

The conference opened with a welcome from the organising team and an update from the former and current Debian Project Leaders (DPL).

Former DPL Anthony Towns of Australia opened with a recollection of his term as DPL. Towns noted that the first month of a DPL's term is taken up with media interviews asking questions like "what's it like to be DPL?" before he has the answers. After that, the DPL's role is mainly dispute resolution and money allocation.

Towns introduced Sam Hocevar of France, the new DPL, who has been in the role only a couple of months. Hocevar got into what it is he wants to see for the future of Debian, saying that he would like to see a sexier more efficient distribution that integrates all the desktop components.

Debian's quality assurance team should focus not only on license compliance, Hocevar continued, but on the quality of software. Every package, for example, should have proper man pages. And Debian should be aiming for faster release cycles.

On the topic of efficiency he referred to his DPL election platform. Don't rely on Debian's teams to do what you can do yourself, he asked. Citing Wikipedia's policy, he said "be bold." If something needs doing, just do it.

He also asked that each developer set up his work within Debian so that if he gets hit by a comet or gets married his work can go on. Even a relatively short period of inactivity when the goal is a faster release cycle can have a big impact, he warned.

Hocevar asked developers to take back some of Ubuntu and other distributions' work. Don't ignore their work just because you think Debian is better, he said.

We need better communication, he went on. Use the debian-devel-announce mailing list for things you do. Put everything you do in a public place for future use and reference. Look at patches implemented in other distributions and use them where applicable, improving Debian and Free Software at large together.

In the Q&A session that followed, Hocevar noted that technical expertise in package maintenance should not be the primary requirement for becoming a Debian developer, citing translators as an important development role in which package maintenance itself is not the critical aspect.

Near the end of the lengthy Q&A session he was asked what he thought about governance reforms in Debian. Noting that he has not felt overwhelmed as the DPL, he indicated that he did not see a need for such things as a DPL team, but reserved the right to change his mind in the future.

Evolving Debian's Governance

The topic of governance, governing committees, and conformance with Debian's constitution came up several times over the course of the conference. The next talk on the topic was by Andi Barth, entitled "Evolving Debian's Governance."

Barth started out by noting that while Debian's constitution should be central, it is not, and yet the system does actually work. Any improvements, then, need to be made cautiously, to ensure they actually improve things.

He summarised how the governing structure of Debian currently works, describing Debian's famous flame wars as a form of governance. General Resolutions -- where all Debian Developers are invited to vote on a particular issue -- are long and painful and not often used. There is a Technical Committee responsible for technical policy and for technical decisions where two developers overlap. The DPL has the power to delegate power, and the Quality Assurance team has the power to forcefully upload packages.

So what improvements are needed, he asked. Does Debian need a DPL team? A Social Committee, which would itself become the topic of its own talk some days later in the conference, or perhaps a reform of the DPL delegates system?

In the discussion that ensued it was suggested that Debian should not fire developers over technical violations of the constitution where good work is still being done.

Another commenter stated unequivocally that the problem with Debian is that some infrastructure teams cannot be overridden: they are not elected or accountable, yet can make final decisions. Barth responded that, on the other hand, the people doing that kind of work should not be arbitrarily overridden; the constitution exists for that reason, so as not to give the DPL the ability to fire people over the colour of their hair.

Sam Hocevar, the DPL, chimed in that there is no sense in risking the participation of important contributors to the Debian project at large by firing them from a specific role.

Another participant noted that when people are causing trouble only due to lack of time to contribute, they should be eased out and replaced rather than engaged in a flame war by others. It would be helpful, said the same commenter, if Debian experimented with telling people they have been "hit by a bus" to see how the project can cope without specific people.

An SPI first

During DebConf 7, Software in the Public Interest, the legal umbrella behind Debian and several other projects in the US, held a meeting of its Board of Directors in person, a first for the board with members from four countries.

The DPL noted that in person meetings tend to be beneficial and suggested that if Debian Developers need to get together to resolve something that they should contact him. Debian has the money should developers need to meet.

Still on the topic of governance, Andreas Tille hosted a discussion session during the conference entitled "Leading a Free Software project" in which he sought to answer three basic questions: What to lead? Who to lead? How to lead?

At the core, he asked, what is the motivation to work on Free Software? Getting something to work for yourself, he answered himself. There is no ready solution to work on a certain task, so you start coding. If you release the code as Free Software you can pick up some colleagues who have the same or similar needs, and then you start splitting the work and improving the code. Releasing Free Software, he said, is a clever thing to do. Releasing code is a way to make friends.

His example for Free Software project leadership is the Debian­Med distribution, a Debian­based distribution for the bio­medical community. He became leader, he said, by issuing an announcement of the project. If you want to avoid becoming a project leader, he cautioned, don't be the one to announce the project.

If you take care of the infrastructure for the project, like the Web page and the mailing list, you are continuing on the path to becoming the project leader by default, he cautioned; you have done some work, so now you are the leader. If you try to do reasonable things to bring the project forward, people tend to draw you into a leadership position. He called this type of leadership a "do-ocracy" -- he who does, rules.

Who to lead? Free Software developers -- individualists, he said. They just behave differently from normal computer users. They want to dive deeper into the project and not necessarily accept what others present. Developers, he warned, often refuse to accept leadership; just look at their T-shirts. They do, however, accept leadership from people they respect and with whom they would normally otherwise agree. To that end, if you do not have a technical background, you will simply not be accepted as a Free Software leader, he warned.

On effecting decisions, Tille noted that you have no lever with which to force your developers. There is no motivation for them other than working on their own project; if people are forced to do things they do not want to do, they will simply leave. The relationship is different from the employer-employee relationship.

There is also the risk of projects forking, he commented, which is not normally a good course of action. Differences between developers and the project leadership are generally the source of forks.

Tille's talk finished with an extended discussion on how leaders may or may not have control over project developers. Tille summarised it nicely by saying, "You have to be clever to be a leader."

A mission statement for the project, periodic reports, and good communication, including in person or by telephone as needed, are important for the good functioning of a project, participants agreed.

Be nice to your people, Tille advised. Sometimes people in a position of leadership tend to become harsh toward others. If someone does a good job, say so. Positive reinforcement works. Don't reject things out of hand and be an example to follow. It's about taking a leading role, not about being a strong leader, he said.

Organizing events

Thursday, Alex "Tolimar" Schmehl hosted a discussion group entitled "Debian Events Howto," with an eye toward helping Debian Developers organise booths at conferences when invited.

The problem, Schmehl said, is that Debian is invited to participate at conferences and trade shows around the world, and currently is not able to keep up with the demand. The organization declined at least 15 invitations last year due to a shortage of volunteers ­­ an unfortunate situation, he said.

Schmehl said he learned how to organise a booth the hard way when organising one for LinuxWorld in Frankfurt some years ago. In spite of his lack of experience, he said, it went well. Organising a booth, he assured his discussion group's participants, is easy.

Start by brainstorming about what is needed. The short list, in the order determined by the room: T-shirts and merchandising goods; name tags, to help gain the trust of people coming to talk to the booth attendants; something to look at other than the volunteers themselves, such as a demonstration computer; and a code of conduct for participants that includes a reminder that they are not there to hack in the corner.

DVDs and CDs are, of course, critical; people come to the booth for Debian, after all. Burned DVDs or CDs are fine, Schmehl said, and can even be burned while people wait, giving visitors more of a chance to talk about Debian and learn more about it in the meantime.

Critically, have a place for technical support at the location. People often show up with laptops or full desktop systems seeking help with Debian, and a place in the booth that is relatively out of the way to help these people is important, he advised.

Someone asked how you could expect volunteers to attend with a code of conduct. Schmehl pointed out that developers already contend with a code of conduct to make Debian packages. The same idea applies to volunteering in a booth. There need to be some minimum criteria for being a booth volunteer, he said.

Another person suggested charging a nominal fee for CDs so that they last beyond the first hour of the show. Schmehl suggested making CDs for a minimum donation of zero cents.

Posters and flyers are important and help visitors remember Debian, Schmehl noted. Participants suggested that there should be a set of slides running in a loop.

If at all possible, keep at least one Debian Developer around at the booth, even if they step away for a few minutes. Some people, Schmehl said, come in from far away to see the Debian booth specifically in order to get their GPG key signed and get into the Debian keyring. Finally, Schmehl said, keep pens and papers around to write notes out for visitors.

Schmehl posted three photos of booths from different conferences. The first showed a group of people crowded around a laptop with a scantily clad woman as the background, and he asked the audience to spot the problems with this booth. This being a geek crowd, the first problem noted was not the background but the fact that the people were facing away from the conference aisle. Among the other problems was the lack of posters or other identifying marks around the booth.

Among the things he warned about: make sure that everyone present at the booth has the passwords needed to use any demonstration computers, to avoid the embarrassment of a visitor coming to the booth wanting a demonstration and no one present being able to actually give them one.

It is important for there to be more than one attendee so that people are able to take breaks from booth duty. Taking care of the volunteers with snacks and breaks, Schmehl pointed out, is important, lest the volunteers become unpleasant or aggressive over the course of the day. Also, Schmehl suggested ensuring that the volunteers are dressed appropriately for the type of conference or trade show; a businessperson-oriented conference requires better dress than a developers' conference, for example. Find a place to keep volunteers' personal belongings out of sight. Figure out in advance what people will want to know, too, he advised. Check Schmehl's event howto and the Debian booth information page for more booth FAQs and information.

Critically, a member of the group concluded, if you accept an invitation, show up.

In times to come, DebConf 8 will take place in Argentina, and DebConf 9 will most likely be in the region of Extremadura, Spain, the only bidder to have put forward a strong bid for the 2009 conference.

Originally posted to Linux.com 2007-06-25; reposted here 2019-11-23.

conferences foss 2389 words - permanent link - comments: 0. Posted at 20:24 on June 25, 2007

OLS Day 4: Kroah­Hartman's Keynote Address

The fourth and final day of the 2006 Ottawa Linux Symposium saw the annual tradition of the closing keynote address, this year by Greg Kroah­Hartman, introduced by last year's keynote speaker, Dave Jones, and the announcement of the next year's speaker.

Dave Jones introduced Greg Kroah­Hartman of Novell's SUSE Laboratories, noting in his introduction, among other things, that in his analysis of kernel contributors, sorted by volume, Greg was high on the list.

He is responsible for udev, for which we should beat him later, Jones noted, adding that Kroah-Hartman had spent two years on a crusade to remove devfs from the kernel (to great applause). Jones described Kroah-Hartman as approachable and highly diplomatic, calling his approach to kernel communications diplomacy at its best. He punctuated this with a photo of Kroah-Hartman sitting at a table with his middle finger raised in a most diplomatic pose.

Kroah­Hartman began by noting that he is sure his daughter appreciates the photo.

Kroah­Hartman's keynote was entitled "Myths, Lies, and Truths about the Linux Kernel".

He started with a quote: "My favorite nemesis is that plug and play is not at the level of Windows." Surely, he said, such a quote must come from someone not educated in the ways of Linux. It must be from someone unfamiliar with the system and its progress. He went on to the next slide, and the quote gained an attribution: it was said by Jeff Jaffe, the CTO of Novell. Surely, then, continued Kroah-Hartman, it must have been said a long time ago! Slide forward: Jaffe said it on April 3rd, 2006.

Linux, said Kroah­Hartman answering the charge, supports more devices out of the box than any other operating system ever has. Linux is often even ahead of the pack, being the first operating system to implement both USB2 and bluetooth.

Linux, he continued, supports more different hardware platforms than any other operating system.

Someone shouted from the audience, 'what about NetBSD!' to which Kroah­Hartman retorted that Linux blew away NetBSD about three years ago.

Everything from cell phones to radio controlled airplanes to 73% of the top supercomputers in the world run Linux, he said. Linux scales.

Mr. Jaffe, he commented, should try his own product. We are doing something really good, he continued.

The "kernel has no obvious design", or roadmap, Kroah said, citing the next fallacy about Linux he intended to attack. Marketing departments like roadmaps and design paths, he said, but Linux does not provide them. Linux, he said, has created something no­one else ever has.

"Open Source development violates almost all known management theories." ­ Dr. Marietta Baba, Dean of the Department of Social Science at Michigan State University, he quoted.

He posted a slide showing a picture of a painting of a naked man from what appeared to be a religious context next to a squid­like animal with a number of weird anomalies, and a quote: "Linux is evolution, not intelligent design." ­ Linus Torvalds.

Linux started off being (barely) supported on a single processor, Kroah­Hartman noted. Then someone offered to fix it to run on another processor, and the process of evolution was well under way. Linux evolves by current stimuli, not by marketing department requirements, he said.

The only way to help the evolution, he continued, is to provide code to the kernel. Ideas without code backing them won't get far.

Linux implemented the POSIX standard six or eight years ago, he said. The evolution of the kernel is fast now, with around 6000 patches per major release. It is changing faster than ever, and becoming more stable than ever.

He moved on to the next myth, paraphrasing a common one: "the Kernel needs a stable API or no vendors will make drivers for Linux." For those who don't understand it, he said, an API is how the kernel talks to itself. He suggested reading Documentation/stable_api_nonsense.txt in the kernel source directory for more information on the topic.

Linus doesn't want a stable API, he said. The USB stack, for one, has been re­implemented three times so far. Linux now has the fastest USB stack available, limited only by the hardware. Linux is lean and complex.

Windows, too, has rewritten the USB stack 3 times, he noted, but all three of them have to stick around in the system to support the various and uncontrolled old independent drivers kicking around. Linux' native support for drivers means that independent drivers are not a problem and therefore that the API can be rewritten as needed without keeping older versions kicking around in the kernel. Windows has no access to the drivers and cannot adjust the API as a result.

The next myth he poked a hole in is the notion that, "my driver is only for an obscure piece of hardware. It would never be accepted into the mainline kernel." We have an entire architecture, Kroah­Hartman countered, being used by just two users. There are lots of drivers, he said, that have but one user.

A company contacted Kroah­Hartman, he said, to find out about putting a driver into the kernel for an obscure task that they needed to do. The driver was put in, and several other companies that also had to do similar tasks no longer needed to maintain their own versions. The contribution became useful on a more widespread basis in a way that could not have been foreseen and is now used to support thousands of devices. Just get your code in, he said, people might actually use it.

He went on to address the problem of closed source and binary kernel modules. Every IP lawyer he has talked to, he said, regardless of who they work for, has agreed on one fundamental point: "Closed Source Linux kernel modules are illegal." The lawyers can't say it in public, he said, but he can.

He suggested not asking legal questions on the Linux Kernel mailing list, asking: would you ask the list about a bump on your elbow?

Kroah­Hartman explained how closed source modules included in Linux distributions cause problems and prevent progress, holding back the kernel included with the distributions. Closed source Linux kernel modules are unworkable, he said.

Companies that have intellectual property they say they want to protect, he said, should not use Linux.

When you use Linux, you should follow its rules. You are saying that your IP, he said, trumps the entire Linux community, that you are more important than everyone else. Closed source Linux kernel modules are unethical, he said.

He suggested to companies that they read the kernel headers for who owns the copyright on various parts of Linux. He noted that you will find companies like AMD, Intel, and IBM represented there. Do you really want to skirt under these companies' lawyers?

Novell, on February 9th, 2006 said in an official policy statement that it will no longer ship non­GPL kernel modules. It is as good a statement as is possible for any such company, he said, noting the lack of reference to the fundamental legal reason for avoiding it.

Someone shouted out asking about Nvidia. Nvidia, he said, and ATI, and VMWare all violate the GPL. But they do it cleverly. They write their code against the kernel source, but they do not link it -- the step that violates the GPL. They force the end user to do that, which prevents them from redistributing their builds.

VMWare, he commented, is not open source.

He moved on to the next myth: it is hard to get code into the main kernel tree.

If there are 6000 changes per release, someone is getting it in, he said. All that you need to do is read the Documentation/HOWTO file in the kernel tree and know what you are doing.

There are a number of ways to start working on the kernel, he explained. The first and easiest is to check out the Kernel Newbies project, available as a wiki and webpage at kernelnewbies.org. The second way, he said, is to join the Kernel Newbies mailing list. He said it is virtually impossible to ask a stupid question on that list; just read the recent archives so you don't ask a question that has just been asked. The third and final way is to get on the Kernel Newbies IRC channel. He said there are around 300 users there and the channel is usually quiet, but not to worry, people will generally answer your questions.

But when seeking help, he cautioned, be prepared to show your code. People are not inclined to help people who are working on closed source code.

The next step up from Kernel Newbies is the Kernel Janitors project. This is a list of things that need doing. Check the list and see if you can knock some items off it. Getting a kernel patch accepted is a good feeling, he said. A lot of people started this way.

The next step up is to join the Linux Kernel Mailing List, which has around 400 to 500 messages a day. Don't feel bad about not reading every message, he said, everyone filters. The only person in the world, he said, who reads all of the messages is Andrew Morton. Subscribe, and ask questions there, he suggested.

There are not very many people, he admitted, who review the code that comes in. But when someone reviews your code and gives you feedback, they are right. They are not the bad people, he warned, the people submitting the bad code are. Kroah­Hartman said he tried it for a week. He said it made him grumpy.

He suggested that people wishing to contribute should spend a few hours a week reading existing code.

You must learn to read music, he analogized, before you can write it.

If you can't contribute by writing code, what can you do, he asked.

It is not possible to do comprehensive regression tests on the Linux kernel, he said. You cannot test what happens if you add this device and remove that device in this order over this time. It just doesn't work.

The best test, he said, "is all of you". Test Linus' nightly snapshots, he urged.

If you find a problem, post it to bugzilla.kernel.org. Then bug him until he feels bad, so that it gets fixed.

Also, he said, try and test the -mm kernel tree.

In conclusion, he said, Linux supports more devices than anyone else. Linux progresses by evolution, not design. Closed source drivers are illegal. Linux can use help with reviews and testing.

And most importantly, he finished, total world domination is proceeding as planned.

With that, he moved on to taking questions.

The first question stated that the timeliness of device support is as important as the number of devices supported.

He responded that in order for drivers to come out quickly, it is important for hardware vendors to get involved.

The next person up to the microphone commented that if someone wants to learn how something works in the kernel, write a design document for it.

Kroah­Hartman responded that design documents need to go directly into the source code as it is the only way they will be kept up to date. There is a movement afoot, he said, to get OSDL to hire a full time documentation guy. He suggested that anyone present who works for a member of OSDL suggest to their employers that they contribute funds for this purpose.

Alan Cox came up to the mike to ask a question. Is Microsoft the Borg as the media likes to portray it, he asked, or is it really Linux that is the Borg?

Kroah­Hartman answered that Microsoft is buckling under its own load. By size, Microsoft is the Borg, but by function, it is Linux. But to him, he said, it is not a matter of us versus them. He doesn't mind the competition.

To sum up

Following Greg Kroah-Hartman's keynote address, the annual door-prize draw took place. This year, the CE Linux Forum offered a development platform Linux PVR from Philips, and 3 Linux-based Nokia 770s.

Red Hat contributed 2 laptop bags and 3 red hats to the draw, and IBM threw in a couple of loaded Apple Powermac G5s. The hats were distributed by Alan Cox, with Greg Kroah­Hartman's young daughter picking the numbers.

Conference co­organizer Craig Ross closed out the formal part of the conference with a series of announcements.

The first was that a US passport had been found, and would the owner please come forward to claim it.

No­one did, but it led to lots of laughter.

Ross announced next year's keynote speaker -- SCSI maintainer James Bottomley -- without actually naming him: he held up a bow-tie, which the charismatic Bottomley is known for wearing, and suggested that all attendees wear one next year in his honor.

Ross continued, reminding all attendees that the closing reception at a nearby pub can only be entered with the help of official conference id. Guests and spouses can come if they have an event pass, available through registration, Ross said. Girlfriends, wives, and family met along the way between the conference center and the pub would not be welcome. People may not sleep in the fountain at the pub, and should not attempt to climb the fake balcony inside the pub. If you do misbehave in public, he grinned, at least take off your conference id!

During closing announcements, conference organizer Craig Ross asked the assembled crowd how many were attending OLS for the first time. The number of hands raised in response to this question was quite low, certainly well under a quarter of attendees. It was the fourth OLS that I attended and I am eagerly anticipating next year's.

As for who came to the conference, contrary to what one might expect at an event called Ottawa Linux Symposium, the number of long-haired, unshaven, sandal-wearing geeks is actually very low, though the number of people meeting any one of these criteria on its own is somewhat higher. Judging by the name tags, the number of people attending on their own tab as opposed to being sponsored by their employer is quite low. Most people, though certainly not all, have a company name on their tags. What conclusions can be drawn from this I leave to you to decide.

This year's OLS saw approximately 128 sessions, BOFs, and formalized events, up sharply from 96 a year ago. Sessions started at 10am and, except for a break for lunch, went on until 9 pm most days split between 4 session halls. I took 43 pages of handwritten notes, up one from last year, from attending 26 sessions, up by three over last year. Of these, I covered a mere 12 in these summaries. Once again, I hope you enjoyed this taste of OLS and I hope to see you there next year!

Originally posted to Linux.com 2006-07-23; reposted here 2019-11-23.

conferences foss 2496 words - permanent link - comments: 0. Posted at 20:29 on July 23, 2006

Day 3 at OLS: NFS, USB, AppArmor, and the Linux Standard Base

The third of four days of the eighth Ottawa Linux Symposium saw a deep discussion on the relative merits of various network file systems in a talk called "Why NFS sucks", a tutorial on reverse engineering a USB device, an introduction to SELinux rival AppArmor, and an update on the status of the Linux Standard Base, among other topics of interest.

Why NFS sucks

Olaf Kirch gave his talk entitled "Why NFS sucks", following a pattern of talks entitled "Why _ sucks" at this year's OLS, on the topic of NFS and its many less successful rivals.

He started by commenting that it was really a talk about NFS and what a wonderful filesystem it is. He meant it just as seriously as he meant the original title of the talk.

Everybody complains about NFS, Kirch stated. To prove his point, he asked the audience if anyone thinks NFS is good. Three people raised their hands in an audience of more than a hundred. The SUSE Linux distribution's bugzilla had "NFS sucks" as a catch­all bug for gripes for a while, he commented, though it was recently removed.

In the early 1980s, Kirch stated, getting a little more serious as he began discussing the history of NFS, Sun had a limited network filesystem called RFS. In 1985, Sun released NFS version 2 along with SunOS 2, with no sign of an NFS version 1. In 1986, Carnegie Mellon University and IBM created AFS. 1988 saw the creation of Spritely NFS, which was NFS version 2 with cache consistency. It was another six years before the next major development on the time-line. In 1994, crash recovery was introduced for Spritely NFS, and that same year Rick Macklem released Not Quite NFS (NQNFS) along with 4.4BSD. In 1995, NFS version 3 was released as, as Kirch put it, general wart removal. In 1997, Sun released WebNFS, intended to be as big as HTTP, but it fizzled. In 2002, NFS version 4, the 'Internet filesystem', was released.

Kirch went on to explain the basics of NFS version 2. NFSv2 is a stateless protocol. This allows either party to carry on as if nothing happened after a crash and reboot or restart. If an NFS server crashes, the client just has to wait until the server comes back up, and then it can continue as it was. If it were stateful, every client would need a state recorded and tracked by the server. A stateless protocol scales better. NFS can export almost any filesystem as a network filesystem. It is an important strength of NFS. It is not filesystem specific.

Files need a file handle that is valid for the entire life of the file, Kirch stated. This works well with inode tables, but new filesystems are more complicated. Directories can reconstruct a chain of entries using the parent directory (..) entries. Files are pointers to inodes and directories. With NFS, these ids can change.

NFS listens on port 2049. It needs to talk to mountd to get the file handle to mount a directory, portmap to get a port to connect to, another protocol to perform file locking, another to recover from a failure in a stateless state, another to recover locks after crashes... Kirch expressed some exasperation with an old NFS attitude from versions prior to four that each new feature requires its own protocol. Version four, he noted, mostly gets it right.

NFS version 2, Kirch commented, is notorious for having its implementation details passed on primarily by oral tradition rather than meaningful specs. He described attribute problems that can result in client/server confusion because of different common implementations.

Renaming or deleting an open file should allow continued writing of that file. Over NFS versions 2 and 3, removing or renaming a file can have, as Kirch put it, interesting results. In NFS version 4, this is solved with "silly rename" which turns the removed file into a dot­file (.nfs.xxxxxx), though this file can also be deleted. The dot­file is then only removed once nothing has it open any more.
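To make the behaviour concrete, here is a minimal Python sketch (mine, not Kirch's) of the local-filesystem semantics that silly rename exists to preserve over NFS: a file can be removed while a process still holds it open, and the open descriptor keeps working until it is closed.

    import os
    import tempfile

    # A minimal sketch of the POSIX behaviour that "silly rename" tries to
    # preserve over NFS: unlinking an open file does not invalidate the
    # descriptors already holding it open. On an NFS mount the client
    # emulates this by renaming the file to a hidden .nfs dot-file.

    directory = tempfile.mkdtemp()
    path = os.path.join(directory, "scratch.dat")

    with open(path, "w+") as f:
        f.write("still readable after unlink\n")
        f.flush()

        os.unlink(path)  # the name is gone, but the open file is not

        f.seek(0)
        print(f.read())  # the data is still there via the open descriptor

    os.rmdir(directory)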

NFS versions 2 and 3 cannot handle simultaneous access to a file properly, he cautioned. The results can be garbled. NFS version 4 also has the problem, but will give an error message warning that there could be trouble.

Another problem inherent in NFS is the lack of file security. The client machine tells the server the user and group ids of the user trying to access a file on the server, and the server agreeably goes along with the information, trusting the client fully. A number of workarounds have been proposed and implemented over time, but none have really caught on.

NFS also has the nasty habit of saturating networks. Prior to version 4, NFS was based entirely on the user datagram protocol (UDP), a lossy protocol that can overwhelm a network if it gets too busy.

Some kind of congestion avoidance was needed, Kirch concluded. It needs to be smarter about re­transmission. The solution he offered is TCP, which NFS version 4 now uses exclusively. TCP is a stateful network protocol that ensures packets reach their destination and retransmits only if the packets were lost.

Kirch noted that there are a variety of alternatives to NFS, and summarized the choice as picking your poison. He listed a number of the alternatives, along with brief descriptions of them, and then gave a more detailed list with their strengths and their flaws:

IBM open sourced AFS as an end-of-life solution for it rather than continuing to maintain it.

DFS came from the Open Group and is either dying or is altogether dead.

CIFS is a surprisingly healthy network file system.

Intermezzo was nicely designed, but went away.

Coda was written by Peter Braam, who subsequently moved on to another project. It's also kind of dead.

Cluster filesystems exist, Kirch noted, but generally live on top of either NFS or CIFS.

NFS with extensions, called pNFS, stores files and metadata on separate servers.

Kirch, having listed them, got a little more in depth about a few of them.

AFS he called "Antiques For Sale" and said the filesystem is in maintenance mode. It relies on Kerberos 4 for security. The code itself is difficult to read, being a mass of #ifdef statements used to make it portable across multiple platforms. It is not interoperable, and cannot function on 64­bit platforms.

CIFS he called the "Cannot Interoperate File System". It is a stateful, connection based network file system. He described the protocol as a jungle, saying he couldn't speak about it any further because it is just "horrible". Its biggest problem, he noted, is it is controlled by Microsoft and that is its main barrier to adoption. Users want to know that it will still be there tomorrow, he added.

NFS version 4 he described as "Now Fully Satisfactory?" It's an Internet­oriented filesystem that has got a lot of things right. It interoperates with Windows, is on a single, firewall­friendly port (2049), and a flaw in callback code that opened another port has even been fixed in version 4.1. It is entirely TCP, with UDP now a thing of the past.

Basics on reverse engineering a USB device

I attended one tutorial session: Reverse engineering USB drivers for compatibility, by F/OSS consultant Eric Preston.

He began with a standard disclaimer ­­ "This is for educational purposes only."

The premise is simple: USB devices often lack vendor support. The vendors don't care about Linux, and their excuses range from nobody uses Linux to USB­IDs are intellectual property to who cares about USB, anyway, Linux isn't on the desktop.

What do we do, Preston asked. We can wait for support from the device vendors or the community at large, or we can do it ourselves.

The mission, therefore, Preston stated, is to figure out how existing drivers work in order to write drivers ourselves. The goal is to support cool hardware, get more people involved in writing userspace drivers, and remove barriers for less experienced developers; make driver writing fun and less tedious.

The tools needed to reverse engineer a USB device, Preston explained, are, primarily, usbsnoopy and Windows. Using Windows, where most drivers are, and usbsnoopy, it is possible to see the interaction of packets between the USB device and the device driver in the operating system. It creates a log which can then be decoded into the functions.

To figure out what is what, simple tasks can be performed in Windows on the USB device and the interaction monitored and logged. Then the USB specification can be consulted and the log can be manually decoded, eventually, after months of work, resulting in some idea of what is happening.

With the help of VMWare or other virtualization programs, the painfully frequent reboots involved in the process can be avoided, and Linux tools can be used in place of usbsnoopy: the Linux program usbmon, combined with the network sniffer ethereal and an ethereal dissector for USB traffic, can monitor the device's traffic. Preston is writing the dissector, but warned that the code is very messy and not quite ready to be shared yet.

The drivers themselves can be written with the help of libusb, entirely in userspace. With the advent of libusb, it is no longer necessary to write kernel drivers to run USB devices. Preston did not actually write a driver in the tutorial, but did show attendees in the packed, beyond-capacity room the path to do so.
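As a rough illustration of what such a userspace driver looks like, here is a minimal sketch using the PyUSB binding to libusb; the vendor and product IDs, the control-transfer values, and the endpoint address are all made-up placeholders standing in for whatever the sniffed Windows traffic reveals.

    import usb.core

    # Minimal sketch of a libusb-based userspace driver (via PyUSB).
    # Every numeric value below is a hypothetical placeholder; in practice
    # they are read out of the packet logs captured with usbsnoopy/usbmon.

    VENDOR_ID = 0x1234    # hypothetical vendor ID
    PRODUCT_ID = 0xabcd   # hypothetical product ID

    dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
    if dev is None:
        raise SystemExit("device not found")

    dev.set_configuration()

    # Replay a control transfer observed in the sniffed traffic
    # (bmRequestType, bRequest, wValue, wIndex, payload are invented here).
    dev.ctrl_transfer(0x40, 0x01, 0x0000, 0x0000, b"\x01")

    # Read up to 64 bytes from a hypothetical bulk IN endpoint (0x81).
    data = dev.read(0x81, 64, timeout=1000)
    print(bytes(data))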

AppArmor vs SELinux

Among the interesting BOF sessions of the evening was one called The State of Linux Security, led by Doc Shankar of IBM.

He invited several security experts to give brief updates on their security projects, largely concentrated around SELinux, whose esoteric nature is completely over my head. But one brief presentation particularly caught my attention.

Crispin Cowan of Novell presented recent Novell acquisition Immunix' AppArmor Linux security suite which appears to be an alternative to SELinux.

Its simplicity and logic led me to wonder why I had never heard of it before. The long and the short of it is that it is a security tool that restricts a service or application to only the privileges, including specific root privileges, and files it needs to perform its duties -- and it is capable of learning what those are without being explicitly told, by watching the programs to be defended perform their tasks and logging what they do. Cowan did a brief demonstration, showing how Apache could be tied down with AppArmor in just a couple of minutes, preventing a root hole in a sample web page from being exploitable by virtue of not allowing the resources needed to exploit it.

How can you beat that?

Update on the Linux Standard Base

The last session I attended on the third day was the obligatory annual Linux Standard Base update, presented as a BOF by Mats Wichmann.

Since the last OLS, Wichmann says that the Linux Standard Base version 3.1 has been released ­­ in two parts. The first part, the LSB core, was released in November of 2005, with the second part, the modules, being released in April of 2006. It was split into two to allow it to meet International Standards Organization (ISO) deadlines to become an ISO specification.

As a result of the ISO involvement, there are now two LSB streams. One is a relatively frequently updated version administered by the Linux Standard Base project itself, the other is the ISO specification. The two specifications are essentially identical.

The ISO specification exists mainly to allow governments to cite an ISO standard when releasing contract tenders for technology, which lets them specify Linux Standard Base compliance as a requirement. ISO/IEC standard 23360 provides this.

The Linux Standard Base documentation is released under the Free Documentation License, but for the ISO, it is effectively dual­licensed documentation to allow the ISO to retain it as an official standard under their direction.

Asked how hard it is to keep the ISO version of the LSB standard up to date, Wichmann replied that it is a concern. The specifics of the specification cannot be changed all the time, even though the LSB project itself is evolving. The ISO specification can be kept up to date with occasional errata report filings, but the update cycle with the ISO is approximately 18 months. As a result, the ISO spec will inevitably lag behind the LSB specifications.

The next question asked who gets certified with the LSB. Wichmann answered that any company that has an economic interest in certifying its distribution or software package will do it, if there is a return. In theory, anyone can get any software certified, he noted, and there is no reason that companies cannot keep their software compliant even if they don't go through the process of actually being certified. Questions on how conformance is verified and how long it takes were also asked. It's a self test, Wichmann admitted. Labs are too expensive, but tools are available for anyone to download and run against the software they would like to check for compliance. If there are no errors, the tests can easily be completed in a single day. If there are errors, naturally it will take longer. To become certified, the logs of the tests need to be submitted.

It was noted during the session that the Linux Standard Base's role is more or less passive. It does not mandate standards that are not generally already the norm. Its mandate is to document, not to push, even if better systems exist than the ones that are in use.

Next up

The last day of the conference promises to be exciting, with Greg Kroah­Hartman's keynote address. Stay tuned!

Originally posted to Linux.com 2006-07-22; reposted here 2019-11-24.

conferences foss 2322 words - permanent link - comments: 0. Posted at 13:51 on July 22, 2006

Day two at OLS: Why userspace sucks, and more

OTTAWA ­­ Day two of the eighth annual Ottawa Linux Symposium (OLS) was more technical than the first. Of the talks, the discussions on the effects of filesystem fragmentation, using Linux to bridge the digital divide, and using Linux on laptops particularly caught my attention, but Dave Jones' talk titled "Why Userspace Sucks" really stole the show.

The first of these talks, "The Effects of Filesystem Fragmentation," was led by Ard Biesheuvel, a research scientist who works on Personal Video Recorders (PVR) in the Storage Systems & Applications group of Philips Research. Biesheuvel explained that a PVR operates by recording a television signal to a box, and employs metadata to describe what is available. It has some degree of autonomy in what it does, and does not, record by creating a profile of what the user likes to watch, or recording something that a friend's PVR is recording. It records a lot, and it can often record more than one TV show at a time.

With the PVR explained as the demonstration platform, Biesheuvel's talk carried on to filesystem fragmentation. Fragmentation, Biesheuvel said, is generally expressed as a percentage, but a percentage does not make its impact clear. A new metric is needed for determining the impact of filesystem fragmentation, and a useful one is relative speed.

Biesheuvel showed a slide of a diagram of a hard drive platter. It showed how data is stored on tracks ­­ rings of data around the platter ­­ and each track is offset from the next by an amount appropriate for allowing the disk head to leave one track and get to the next, arriving at the right point to continue.

A gap, he explained, is the space between segments of a file not belonging to the file. Fragments are the non­contiguous pieces of the same file. Hard drives generally handle small gaps by reading through the data on the same track through the gap, while on larger gaps the drive head will seek (travel) to the track of the next fragment and then read it. Ideally, he says, there will be one seek and one rotation of the drive per track of data belonging to the file being read.

With the background explained, he described the tools for his tests. The first, called pvrsim, operates by simulating a PVR. It writes files between 500MB and 5GB in size to disk, two at a time, endlessly emulating the life­cycle of a PVR. It deletes recordings as space is needed for new ones by a weighted popularity system.

The next tool is called hddfragchk, which is not yet available for download, but Biesheuvel says it will be made available eventually. The hddfragchk utility shows the hard drive as a diagram of tracks with the data from each file assigned a color. He demonstrated animated GIFs of hddfragchk in operation, showing the progression of the filesystem fragmentation as pvrsim runs.

The first filesystem was XFS, which showed clear color lines with small amounts of fragmentation visible as the files moved around the disk in the highly accelerated animation. The other filesystem he showed was NTFS, which resembled static as you might see on a television that is not receiving signal, as the filesystem allocated blocks wherever it could find room without much apparent planning.

Biesheuvel then went on to show a graph showing an assortment of filesystems and their speed of writing over time. All filesystems showed a decline over time, with some being worse than others, though I did not manage to scribble down the list of which was which.

Relative speed is highly filesystem dependant, he concluded. Filesystems should maintain the design principle that a single data stream should stick to its own extent, while multiple data streams must each be separately assigned their own extents.

Extents were not explicitly explained during the talk, but it can be deduced from the discussion that they are sections of the filesystem pre-allocated to a file. He expressed optimal hard drive fragmentation performance mathematically, and stated that equilibrium is achieved when as many fragments are removed as are created.

Biesheuvel also says that there is a sweet spot in fragmentation prevention with a minimum guarantee of five percent free space. At five percent free space, fragmentation is reduced. Ultimately, he says, relative speed is a useful measure of filesystem fragmentation. Even the worst filesystem performers do not drop below 60% of optimal speed.
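To illustrate the metric rather than the measurement tools, here is a tiny sketch of relative speed as I understood it -- throughput of the aging filesystem divided by the throughput it delivered when freshly formatted -- with made-up sample numbers; this is not code from pvrsim or hddfragchk.

    # Relative speed: write throughput of the aged filesystem divided by
    # the throughput of the freshly formatted filesystem. The numbers
    # below are invented, purely to show the calculation.

    throughput_mb_s = [92.0, 90.5, 84.0, 77.2, 71.9, 63.4]

    baseline = throughput_mb_s[0]
    for run, sample in enumerate(throughput_mb_s):
        print(f"run {run}: {sample / baseline:.0%} of initial speed")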

Why userspace sucks

Dave Jones, maintainer of the Fedora kernel, gave his "Why Userspace Sucks ­ (Or, 101 Really Dumb Things Your App Shouldn't Do)" talk in the afternoon for a standing­room only crowd. Jones' talk focused on his efforts at reducing the boot time in Fedora Core 5 (FC5), and the shocking discoveries he made along the way.

He started his work by patching the kernel to print a record of all file accesses to a log to look for waste.

He found that, on boot, FC5 was touching 79,000 files and opening 26,000 of them. On shutdown, 23,000 files were touched, of which 7,000 were opened.

The Hardware Abstraction Layer (HAL) tracks hardware being added and removed from the system, to allow desktop apps to locate and use hardware. Jones says that HAL takes the approach "if it's a file, I'll open it." HAL opened and reread some XML files as many as 54 times, he found. CUPS, the printer daemon, performed 2,500 stat() calls and opened 500 files on startup, as it checked for every printer known to man.

X.org also goes overboard, according to Jones. Jones showed that X.org scans through the PCI devices in order of all potential addresses, followed by seemingly random addresses for additional PCI devices, before starting over and giving up. He paid special attention to X fonts, noting that he found that X was opening a large number of TrueType fonts on his test system.

To see what it was up to, he installed 6,000 TrueType fonts. Gnome­session, he found, touched just shy of 2,500 of them, and opened 2,434 fonts. Metacity opened 238, and the task bar manager opened 349. Even the sound mixer opened 860 fonts. The X font server, he found, was rebuilding its cache by loading every font on the system. He described the font problems as bizarre.

The next aspect of his problem identification was timers. The kernel sucks too, he said: USB fires a timer every 256 milliseconds, for example. The keyboard and mouse ports are also polled regularly, to allow support for hot­pluggable PS/2 keyboards and mice. And the little flashing cursor in the console? Yes, its timer doesn't stop when X is running, so the little console cursor will continue to flash, wasting a few more CPU cycles.

Jones says that you don't need the patched kernel and tools that he used to do the tests. Using strace, ltrace, and Valgrind is plenty to do the work to get rid of waste, says Jones.
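As a small illustration of that kind of hunt, here is a sketch (not Jones' tooling) that tallies which files get opened most often in an strace log; it assumes a log captured with something like strace -f -e trace=open -o app.log, and the log file name is hypothetical.

    import re
    import sys
    from collections import Counter

    # Count how often each file is open()ed in an strace log, to spot the
    # sort of waste Jones described (the same XML file reread 54 times,
    # thousands of fonts opened, and so on). Assumes a log captured with
    # something like: strace -f -e trace=open -o app.log <program>

    OPEN_CALL = re.compile(r'open(?:at)?\("([^"]+)"')

    log_path = sys.argv[1] if len(sys.argv) > 1 else "app.log"
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            match = OPEN_CALL.search(line)
            if match:
                counts[match.group(1)] += 1

    # Files opened many times over are the first candidates for caching,
    # or for not being opened at all.
    for path, n in counts.most_common(20):
        print(f"{n:6d}  {path}")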

An audience member asked, after fixing all these little issues, how much time is saved? Jones replied that roughly half the time wasted by unnecessary file access was saved. However, the time saved is taken up by new features and applications that also consume system resources. As a result, says Jones, it is necessary to do this kind of extensive testing regularly.

Another attendee asked, how can we avoid these problems on an on­going basis? One suggestion is to have users who don't program, but wish to be involved in improving Linux, take on the testing work. The last question of the question­and­peanut gallery answer session at the end of the talk asked if KDE was as bad as GNOME in these tests. Jones replied that he had not tried.

As the Q&A continued, the session became more of a Birds of a Feather (BoF) than a presentation. The back­and­forth between Jones and the audience had most of the packed room in stitches most of the way through.

Bridging the digital divide

In the evening, I attended a BoF session run by David Hellier, a research engineer at Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO), on the topic of bridging the digital divide. His essay on the topic won him an IBM T60 a day earlier.

Hellier says he would like to use Linux and Open Source to help bring education to the millions of extremely poor people throughout the world. In Africa alone, 44 million primary aged children cannot get a basic education.

A participant mentioned that there are 347 languages in the world which more than a million people speak, not all of which have translations of software, though some even smaller ones have translated versions of Linux. Another person pointed out that translating an operating system and applications is only part of the battle. The important part is translating the general knowledge associated with it. Tools that are translated must also be available off line. Remote, poor communities are unlikely to have much in the way of Internet access even if they are lucky enough to have electricity.

Linux developers, Hellier says, are largely employed by big companies. As such, they are in a position to suggest ways to get their companies to help close this digital divide.

How is it different from missionary work, one person asked, to send people with these unfamiliar tools to the depths of the developing world? Hellier responded that the key difference is that governments all over the world are screaming for all the help they can get.

Major software companies are going to the developing world to evangelize their wares, however, and it is important to counteract this effect. The ultimate goal is to help people help themselves, noted Hellier.

The discussion moved on to how to address this topic on a more regular basis than at conferences once every year or two in a BoF session. Hellier had started a wiki for discussion on bridging the digital divide prior to the start of the session at olsdigitaldivide.wikispaces.com, and it was suggested that an IRC channel be created for further discussion -- a method, noted an audience member, used successfully by kernel developers for years. An IRC channel, #digitaldivide, was accordingly created on irc.oftc.net.

Hellier also recommended looking at a number of tools, including the Learning Activity Management System, Moodle, and the sysadmin­free usability of Edubuntu.

Linux on the laptop

The last session I attended yesterday was the BoF session run by Patrick Mochel of Intel on the topic of Linux on the laptop. It was an open BoF with no specific agenda and no slides. Mochel noted the presence of several relevant people to the discussion, including some developers of HAL, udev, the kernel, ACPI, and Bluetooth.

The discussion began with talk about suspend and resume support on recent laptops and the weaknesses therein. Mochel noted that while suspend and resume support is a nice thing, it does not buy you anything with the most critical aspect of a laptop -- battery life. This brought about a lengthy discussion of various things that waste electricity in a laptop. The sound device, for example, should be disabled when it is not being actively written to, and network devices that are not being used should be disabled to conserve power.

The discussion evolved quickly, turning next to network states. It is possible, argued Mochel, to have the network device down until a cable is plugged into it, in the case of wired networking, and only come up when a cable­connected interrupt is received. This can be important because a network card that is on is wasting power if it is not connected to a network.

Removing a kernel module does not necessarily reduce power to a device, someone noted. Fedora only removes modules when suspend cannot be achieved without doing so, commented another.

Another participant asked whether there's any documentation on how drivers should work with regards to power management? The answers were less than straightforward, with one person asking if there's documentation on how drivers should work for anything at all. Another suggested posting a patch to the Linux kernel mailing list and seeing the reaction.

The topic of tablet PCs and rotating touch screens was brought up. Touch screen support has been improving over the last few years, it was noted, but mainly in userland. Someone commented that the orientation of the rotating monitors on tablets is determined by differential altimeters sensing air pressure differences between the ends and determining orientation as a result.

Rotating screens are not only a problem for X, says Linux International's Jon 'maddog' Hall, but for consoles as well. Pavel Machek replied that 2.6.16 and newer kernels allow command line tools to rotate the console.

The discussion then turned to biometrics, in light of the fingerprint scanner present on many newer IBM laptops. Microsoft, came a comment, is pushing for a biometric API in its next version of Windows. A biometric API exists for Linux, and sort of works. It supports the fingerprint scanner by comparing the image taken by the scanner to ones stored, a solution noted by others present to be less than secure since the image is not hashed -- something that has been done for user passwords on Linux for years.

The second of four days of the conference saw more technical talks than the first, with Dave Jones' talk on userspace being the highlight of the day.

Originally posted to Linux.com 2006-07-21; reposted here 2019-11-24.

conferences foss 2251 words - permanent link - comments: 0. Posted at 13:47 on July 21, 2006

First day at the Ottawa Linux Symposium

OTTAWA ­­ The 8th annual Ottawa Linux Symposium (OLS) kicked off Wednesday in Ottawa, Canada at the Ottawa Congress Centre. Jonathan Corbet, co­founder of Linux Weekly News, opened the symposium with The Kernel Report, an update on the state of the kernel since last year.

Corbet started his talk with a brief recap of the Linux kernel development process. According to Corbet, Linux kernels are now on a two- to three-month release cycle. The current Linux kernel version is 2.6.17, with 2.6.18 expected shortly. All 2.6.x kernels are major releases, with 2.6.x.y kernels being bug-fix releases.

Corbet says that there will not be a 2.7 kernel tree for the foreseeable future, not until there is a major, earth­shattering change that will break everything ­­ and thereby require an unstable kernel tree.

The major release cycle developers use now takes approximately 8 weeks. In week 0, new features are included in the kernel in what is termed the merge window. This is typically in the form of several thousand kernel patches. This process ends when Linus decides there has been enough and the merge window is decreed closed.

The kernel then goes into release candidate mode, with effort going into stabilization and bug­fixing.

Release candidate (­rc) kernels are released periodically and by the theoretical 8th week (which usually is a bit later), a major release is released. Subsequently, all bug­fixes and patches to that kernel come in the form of 2.6.x.y version numbers.

The process of merge windows started a year ago, Corbet said, and the result has been the relative predictability of stable kernel releases. New features come out quickly instead of spending years in queue; distributions are keeping up with more current kernels than they had been.

Corbet showed a graph of kernel patches over time, showing how the number of patches going into the kernel has changed from a more or less straight line to a staircase pattern, with the help of the merge window release cycle now in use.

The quality of the new kernel release cycle has most people happy, Corbet said, emphasizing "most." The perception among some, he said, is that the quality of the kernels is in decline, with too much emphasis on new features and more bugs going in than coming out.

Corbet says that there's not a firm kernel bug count. As the number of users increases, he noted, so too does the number of bug reports. More code means more bugs, even if the proportion of bugs (bugs per thousand lines of code) drops.

Many bugs being fixed are very old bugs, Corbet says. Of two recent security fixes, one was for a one year old security flaw, and the second was for a three year old security flaw. Fewer bugs, and a single bug database to centralize kernel flaws, would be nice to have; and Corbet says that he expects that progress on that front is on the way. Corbet also pointed out that bug tracking isn't very helpful if the bugs don't get fixed.

Kernel developers often lack the hardware needed to fix bugs, and so the bug-fix process can require extensive back and forth exchanges of tests and results. This process is very slow, and often one party or the other gets bored of the process and the bug remains in the kernel.

Another problem, Corbet says, is that there is no boss to direct bug fixing efforts unless there is a corporate interest in fixing a bug somewhere and that company puts the resources in to getting specific bugs fixed. Kernel developers are also often reluctant to leave their little corner of the kernel, he noted.

Introducing bugs in the first place is becoming harder, said Corbet. Better APIs and more use of automated bug­catching tools are improving the situation. It has also been suggested that the Linux kernel do major releases that are strictly about fixing bugs, not adding features. Another suggestion floating around is the assignment of a kernel bug­master. It would need to be a funded position, he noted.

Future kernel development

Corbet went on to summarize the major changes in the kernel since this time last year, when kernel 2.6.12 was current. Among other specifics about the kernels released since, Corbet noted that Linux kernel 2.6.15 was released January 2nd, 2006, 15 years to the day after Linus bought his first development box to begin work on the kernel.

The kernel has a 15­year history, but it doesn't have a five­year roadmap. Corbet says that the kernel has no specific timetable for features, or even a specific list of features that will be implemented, and that there's no way to force development of any particular feature without specific funding. No one knows what hardware will be out down the road, or what users will want. What future we can predict, though, is the next kernel release.

The kernel 2.6.18 merge window has closed, Corbet says, and a number of changes will be in the upcoming release, including a new core time subsystem, and a massive patch set for serial ATA including error handling, and a kernel lock validator. The latter of these changes is designed to help with kernel development. Locks are designed to keep threads apart, he explained, and they're difficult to get right. He also noted that devfs would be removed from the kernel in 2.6.18, which generated widespread applause.

Corbet went on to discuss challenges with integrating virtualization support with the kernel, noting that the various virtualization programs should not need to maintain their own trees and need to come up with a uniform set of patches into the kernel, to avoid each having its own set. He also spent some time discussing kernel security in the form of SELinux, which he said is acquiring real administrative tools, and AppArmor, an SELinux competitor recently purchased by Novell.

The Linux kernel is very unlikely to switch to GPL version 3, Corbet noted at the end of his excellent 45 minute talk, as changing the license would require a consensus of all kernel developers, who still individually hold the copyrights on their little bits of the code. This would not be helped, he noted candidly, by the fact that some are dead.

The slides from Corbet's talk are available here.

Fully automated testing

Later in the day, I attended a talk by Google's Martin J. Bligh entitled "Fully Automated Testing." Bligh started by asking: Why? Automated testing, he says, is not just necessary because testing is a boring occupation. With the kernel 2.6 tree's new development cycle, the rate of change of the kernel is quite scary. Linux is very widely used now, and old methods of bug reporting are no longer adequate.

It used to be that kernels were pushed out, and the developers could wait for feedback from users. Those days are over. Bligh noted that machines are cheap, compared to people, and automated testers don't disappear. Is automated testing the solution to world peace and hunger? No, but Bligh says it's part of a solution to kernel bugs. Automated bug testing requires more coders and more regression testing.

The testing is done upstream, he says, so the testing can be done prior to releases instead of after them.

The fewer users exposed to a bug, the less pain caused. If bugs are found early, new code can be pushed back out of the tree until it is fixed, without causing additional problems, before other features come to depend on it. The earlier bugs can be found, Bligh says, the better.

Bligh noted that extensive automated testing is done on the kernel twice a day. The test system is written in Python, and he discussed at length why Python was chosen as the language for the system. He also spent some time showing the audience test output from the system, and discussed why other languages were not suitable for the test system.

He described Python as a language that meets the requirements for the task because it is a language that is easy to modify and maintain, that is not write­only, that has exception handling, is powerful but not necessarily fast, is easy to learn, and has wide libraries of modules to leverage.

He described Perl as a write­only language, and said that while people can do amazing things with the shell, it is not appropriate for the purpose. He said with a grin that he has a lot of respect for what people can do in the shell, but none for choosing to do it that way in the first place.

One thing he particularly likes about Python, he said, is its usage of indentation. Unlike other languages, Bligh noted, Python is read by the computer the same way as it is read by a person, resulting in fewer bugs.

Bligh says test.kernel.org is better than it was before, but it is a tiny fraction of what could be done.

Bligh says that kernel testing would be improved by an open, pluggable client to share tests. He also called for more upstream testing, and for companies to get involved in testing.

Is it cheaper, Bligh asked, for a company to debug code itself or to help track down bugs for the community to fix before it affects the company? He described the current automated test efforts as the tip of the iceberg. He also encouraged attendees to get involved by downloading the test harness and reading the wiki to get started.

Battery life

Len Brown of the Intel Open Source Technology Centre, and maintainer of the kernel ACPI subsystem, led the next session, "Linux Laptop Battery Life". By their nature, Brown says, laptops are the source of most innovation in the area of power management.

The first part of his talk centered around how to measure how much power a laptop is using in the first place as a baseline. The first method he suggested is to use an AC watt meter with the battery out of the laptop, if possible. It's a $100 test, he said, but fails on the AC to DC power conversion and on the fact that most laptops are aware that they are plugged into AC, and therefore unlimited, power and disable some power saving measures used when a battery is active.

The second method is fundamentally the same, but with a DC watt meter to eliminate the power loss caused by the power brick from the math. A more expensive but somewhat more accurate method, he said, is to set up a DC input system through the laptop battery leads.

The simplest solution though, he pointed out, is to simply use the laptop's built­in information about the battery. Simply run a fully charged battery to fully depleted and see how long it takes. Compare that to the wattage of the battery and you have your power usage. For example, he said, a 53 watt­hour battery that runs a laptop for one hour means the laptop is running at 53 watts. If it lasts two hours, it is only using 26.5 watts, and so forth.
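A small sketch of that "simplest solution" in Python, reading the battery's own accounting from sysfs; the BAT0 path and the energy_*/power_now attribute names vary between drivers and machines, so treat them as assumptions rather than a universal recipe.

    from pathlib import Path

    # Estimate battery capacity, remaining energy, and current draw from
    # the kernel's battery interface. BAT0 and the energy_*/power_now
    # attribute names are assumptions; some drivers expose charge_*/
    # current_now (microamp-hours/microamps) instead.

    bat = Path("/sys/class/power_supply/BAT0")

    def read_int(name):
        return int((bat / name).read_text().strip())

    energy_full = read_int("energy_full")  # microwatt-hours
    energy_now = read_int("energy_now")    # microwatt-hours
    power_now = read_int("power_now")      # microwatts

    print(f"capacity:  {energy_full / 1e6:.1f} Wh")
    print(f"remaining: {energy_now / 1e6:.1f} Wh")
    if power_now:
        print(f"draw:      {power_now / 1e6:.1f} W")
        print(f"runtime:   {energy_now / power_now:.2f} h at current draw")

    # Brown's worked example, restated: a 53 Wh battery that lasts one
    # hour implies an average draw of 53 W; two hours implies 26.5 W.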

He announced the release of a GPLed program he has been working on which, he emphasized, does not do benchmarks, called the Linux Battery Life Tool Kit. The code is available in his directory on kernel.org.

The first test of the testing program, he says, is the idle test. Idling is a most basic function of a laptop. It even idles between key presses on the keyboard, he noted. The next test he said is a Web read test, which looks at a different Web page every two minutes. He described it as an idle test with eye candy and said the results are indistinguishable from idle.

The next test is an OpenOffice.org­based test that specifically requires version 1.1.4 of OpenOffice.org.

The next two tests are DVD playing with Mplayer and a software developer workload test, consisting of browsing and compiling source code.

He gave a number of examples based on his specific laptop, which he indicated vary widely from one laptop to the next, showing the results of his power usage tests under different circumstances.

The results gave both power usage and performance figures for the different circumstances tested.

Among the statistics demonstrated were performance and power usage figures for his dual-core laptop running with one core enabled versus disabled, and the effect of a 5400 RPM hard drive versus a 7200 RPM hard drive.

The faster hard drive gave a huge performance boost for software development, but made no noticeable difference anywhere else, and cost only a small penalty in power usage. Enabling the second core also cost little in extra power, but provided a significant performance boost. The biggest difference, he noted, was in LCD brightness: from the brightest to the dimmest setting on his LCD, the difference in his laptop's battery life was more than 25%.

He also compared Windows XP to Linux performance on his laptop, noting again that performance differences were different from one laptop to the next. On his laptop, DVD playing was noticeably more power efficient in Windows than in Linux. He credited this to WinDVD buffering the DVD instead of just reading it constantly as Mplayer did.

The day was capped off by a reception by Intel, at which there were no speakers, but the winners of an Intel­sponsored essay competition about Linux or open source were announced. The winner, David Hellier, received a very nice laptop for an essay on bridging the digital divide. The runner up received an iPod for his essay, "I can."

Originally posted to Linux.com 2006-07-20; reposted here 2019-11-24.

conferences foss 2278 words - permanent link - comments: 0. Posted at 13:43 on July 20, 2006

PostgreSQL Anniversary Summit a success

This weekend marked the 10th anniversary of PostgreSQL's posting as a public, open source project. To celebrate, the PostgreSQL project held a two­day conference at Ryerson University in downtown Toronto, Ontario, Canada.

The conference started with a keynote address by Bruce Momjian, one of the longest­serving and best known developers of the project, discussing why the conference is taking place, a bit of the history of PostgreSQL, and the future. Momjian started off his talk by announcing to laughs that the PostgreSQL patch queue is empty.

Bruce Momjian

Momjian called his role at PostgreSQL a tremendous honor, and says he does not know what the next ten years would bring for the project. He did predict that tools like PostgreSQL would become more popular.

Great days, Momjian philosophized, rarely announce themselves.

Weddings and graduations come with dates and invitations, but most other significant events just happen. Along the same lines, he noted that open source developers evolve into their developer roles. Many start with submitting a patch during a few hours free time. The contributions snowball and they eventually find themselves with full time employment as a result of their contributions.

PostgreSQL history

PostgreSQL started in the 1980s at the University of California, Berkeley, though most of the people from the era went on to get "regular jobs", says Momjian. In April of 1996, Marc Fournier sent an email to the postgres95 mailing list noting a number of major flaws with the software. Momjian described the state of development as being in maintenance mode. Fournier suggested in his email that, given time and room, it could become a useful project.

The discussion evolved and Fournier offered to host a development server for the project, allowing it to escape from Berkeley and become a modern open source project. Fournier noted in the discussion at the time that Postgres would need to move forward with the help of a few contributors with a lot of time. He commented that a lot of contributors with a little bit of time would not be equivalent.

Fournier's offer to host a CVSUP server came on July 8th, 1996, which the date of the conference commemorates. Pretty soon, work began toward an actual release, allowing the project to graduate out of maintenance mode.

Momjian went on to show the evolution of the PostgreSQL's Web site since 1997, from a comical logo showing an elephant smashing a brick wall to the current professional image of the organization.

Momjian showed a map of the world with markers everywhere he had been representing PostgreSQL, covering much of North America, Europe, and Asia, and commenting that he would soon be adding India and Pakistan to his list of countries. He concluded his keynote with what he termed a show and tell, showing CDs distributed at several points over the history of the project, as well as a Japanese PostgreSQL manual.

Following the keynote, Andy Astor, CEO of EnterpriseDB, got up to make a brief announcement, saying the company has grown to around 100 employees and is based entirely on PostgreSQL. "Thank you PostgreSQL," he says, "for giving me a job to go to." He announced that EnterpriseDB would be giving $25,000 to PostgreSQL as part of ongoing funding earmarked strictly for feature development.

The next talk was by Ayush Parashar of Greenplum on the topic of database performance improvements in the PostgreSQL­based Bizgres database. Parashar discussed various algorithmic improvements to Bizgres' sort and copy functionality. Using a bitmap index instead of a B­tree, he demonstrated, showed vast improvements in large database performance at low cardinality.
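For readers unfamiliar with the trade-off, here is a minimal sketch (not the Bizgres implementation) of why a bitmap index wins at low cardinality: each distinct value gets one bit per row, so a query over a couple of low-cardinality columns reduces to bitwise ANDs over compact integers instead of many B-tree lookups.

    # Toy bitmap index: one integer bitmap per distinct column value,
    # with bit i set if row i holds that value. The data is invented.

    rows = [
        {"region": "north", "status": "open"},
        {"region": "south", "status": "closed"},
        {"region": "north", "status": "closed"},
        {"region": "north", "status": "open"},
    ]

    def build_bitmap_index(rows, column):
        index = {}
        for i, row in enumerate(rows):
            index[row[column]] = index.get(row[column], 0) | (1 << i)
        return index

    region_idx = build_bitmap_index(rows, "region")
    status_idx = build_bitmap_index(rows, "status")

    # WHERE region = 'north' AND status = 'open' -> AND the two bitmaps.
    match = region_idx["north"] & status_idx["open"]
    print([i for i in range(len(rows)) if match & (1 << i)])  # [0, 3]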

Parashar was asked when the improvements would be ported into the PostgreSQL tree. Another Greenplum employee answered, saying it would be after the code was further tested and hardened.

PostgreSQL developer and conference organizer Josh Berkus noted that PostgreSQL 8.2 is going into feature freeze in just three weeks, and the Bizgres patches should be submitted as soon as possible to allow them to be integrated into the rest of the tree properly.

Lightning talks

The third session was a bit difficult for me to keep up with, but from what I understood of it, it seemed quite fascinating. In the course of a one-hour block, 10 speakers were given exactly five minutes each in what was termed a lightning talk. The first two were by employees of Voice over Internet Protocol (VoIP) specialist Skype.

Skype, says Hannu Krosing, runs on PostgreSQL internally. To scale to the massive size the company is working toward, it is developing a scalable database system called PL/Proxy.

According to Krosing, the project is soon to be open sourced. PL/Proxy works on the basic principle of splitting databases up by function, and then providing a simple way for these separate databases to be integrated.
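
PL/Proxy's code was not shown, but the split-by-function idea can be illustrated. PL/Proxy itself routes PostgreSQL function calls to partition databases; the plain-Python sketch below shows only the routing principle, and the host names, hash scheme, and users table are assumptions made for the example.

```python
# Conceptual sketch only -- not PL/Proxy code.  Work for a given key is routed
# to one of several partition databases by hashing the key, much as PL/Proxy's
# RUN ON clause does.  The DSNs and schema are placeholders.
import psycopg2
from zlib import crc32

PARTITIONS = [
    "dbname=users_p0 host=db0.example.org",
    "dbname=users_p1 host=db1.example.org",
]

def get_user_email(username):
    # Pick the partition that owns this key.
    dsn = PARTITIONS[crc32(username.encode("utf-8")) % len(PARTITIONS)]
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            # On the partition, an ordinary query (or function) does the work.
            cur.execute("SELECT email FROM users WHERE username = %s", (username,))
            row = cur.fetchone()
            return row[0] if row else None
    finally:
        conn.close()
```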

The second part of the Skype lightning talk was about Skytools, presented by Skype's Asko Oja. Skytools is a set of tools designed for hard drive failover and generic queueing.

The third lightning talk was by Hiroshi Saito, a member of the Japanese PostgreSQL Users Group (JPUG), discussing pgsnmpd, an SNMP daemon that allows operational monitoring of PostgreSQL databases.

The next in the series was about DBD::Pg, described by Greg Sabino Mullane as the integration of the best database and the best language -- PostgreSQL and Perl. The DBD::Pg module, Mullane says, makes do() loops very fast, using libpq, the PostgreSQL client library. He cited some other improvements, such as UTF-8 (Unicode) support.

He says future releases of DBD::Pg will be developed on Subversion or svk, in an effort to move away from CVS. He hopes, he added, that PostgreSQL moves to Subversion too. He says he would also like to add Windows, Perl 6, Parrot, and DBI v2 support to the module.

The fifth of the ten sessionlets was by someone calling himself only "M", discussing PGX, PostgreSQL client support for Mac OS X. He explained that it is not intended to be a PostgreSQL admin tool, but rather a simple front end tool for PostgreSQL databases.

PGX allows non-blocking execution, which means the user can continue working with the program while it's off querying the database. Asked if it is possible to cancel a query, he was very succinct in saying that that capability had not yet been written. PGX is written in Objective-C, and allows simultaneous querying of multiple databases with the same queries.
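
Non-blocking execution simply means the query runs in the background while the interface keeps responding. PGX is written in Objective-C, so the following is only a concept sketch in Python using a worker thread; the connection string and query are placeholders.

```python
# Concept sketch of non-blocking query execution: the query runs on a worker
# thread while the caller stays free to handle user input.  Not PGX code.
import threading
import psycopg2

def run_query_async(dsn, sql, on_done):
    """Start the query in the background and call on_done(rows) when finished."""
    def worker():
        conn = psycopg2.connect(dsn)
        with conn.cursor() as cur:
            cur.execute(sql)
            rows = cur.fetchall()
        conn.close()
        on_done(rows)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t  # caller keeps working while the query runs

# Fire off a slow query and stay responsive in the meantime.
t = run_query_async("dbname=demo user=postgres",
                    "SELECT pg_sleep(2), 42",
                    lambda rows: print("query finished:", rows))
print("still responsive while the query runs...")
t.join()
```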

The sixth session was by Jean-Paul Argudo on the topic of Slony-I as a generic solution for aggregating data across multiple installations. Instead of replicating a master database to a network of slaves, he explained, the configuration is inverted: Slony-I uses a single slave database to replicate a network of master databases. Users, he says, do not want to connect to each database separately.

The next in the series of brief discussions was on the topic of Red Hat clustering, by Devrim Gündüz of Command Prompt, in Turkey. Gündüz discussed PostgreSQL with Red Hat Cluster Suite. He described it as a redundant system for data, host, server, and power. According to Gündüz, there is no time for downtime. All it needs to work, he says, is hardware powerful enough to run Red Hat Enterprise Linux, and between two and eight servers with identical configurations.

The eighth sessionlet was by Neil Conway about TelegraphCQ, a Berkeley research project. The idea behind TelegraphCQ, he says, is to allow streamed queries. The queries, he says, are long-lived, but the data is short-lived. As an example use for such a system, Conway described security monitoring of sensor networks, with action being taken based on the streamed query.

More information on this project can be found at telegraph.cs.berkeley.edu.

The ninth session was by Alvaro Herrera on the topic of autovacuum maintenance windows. He explained that the system is being based on cron, the task scheduler on most Unix-based systems. It allows maintenance windows to be specified so that database cleanup can be scheduled during that database's off-peak hours.
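
Herrera did not present implementation details, but the cron-style idea can be sketched: heavy cleanup runs only when the clock falls inside the window configured for that database. The window, connection string, and use of a plain VACUUM below are assumptions for illustration, not the autovacuum code itself.

```python
# Rough sketch of a maintenance window: only run cleanup during off-peak hours.
import datetime
import psycopg2

MAINTENANCE_WINDOW = (datetime.time(2, 0), datetime.time(5, 0))  # 02:00 to 05:00

def in_window(now):
    start, end = MAINTENANCE_WINDOW
    return start <= now <= end

def run_maintenance(dsn="dbname=demo user=postgres"):
    if not in_window(datetime.datetime.now().time()):
        return  # outside the off-peak window; leave the database alone
    conn = psycopg2.connect(dsn)
    conn.autocommit = True  # VACUUM cannot run inside a transaction block
    with conn.cursor() as cur:
        cur.execute("VACUUM ANALYZE")
    conn.close()

run_maintenance()
```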

The final lightning session was presented by David Fetter about running a relational database management system as an object within the database. He briefly discussed performance differences between object-based and relational databases.

The lightning sessions concluded the morning of the first day. In the afternoon, Gavin Sherry and Neil Conway presented a pair of back-to-back, one-and-a-half-hour sessions called an introduction to hacking PostgreSQL. After checking that nearly everyone in the room had at least a basic knowledge of the C programming language, they got into it.

You need to know C to hack PostgreSQL, Conway says. Fortunately, it's an easy language to learn.

PostgreSQL, he added, is a mature codebase and good code to help learn C from. Conway says Unix system programming knowledge is useful, but not necessary, depending on what part of PostgreSQL you want to hack on.

He gave a few technical pointers on debugging, such as ensuring that if there's a new bug in your code that you can't explain, you run make clean and recompile from scratch to make sure everything is current.

He recommended ensuring that you have a good text editor, suggesting Emacs, to make your life easier.

He also recommended a number of tools to reduce the amount of development time wasted debugging, such as ccache, distcc, and Valgrind.

Conway and Sherry traded off for the rest of the presentation, providing an entertaining, easy-to-follow tutorial session. Among their warnings: avoid idiosyncrasies in coding style that annoy people and waste time for no discernible positive gain.

Read the code around what you are patching or contributing and make your changes conform to the adjacent style.

Neil Conway

When writing patches, especially ones that add features, send the idea to the project first to make sure it is one that would be welcome. They cited the example of someone who wrote a 25,000-line patch that had to be rejected.

When determining what patches to write, they suggest asking yourself a number of questions, for example:

Is this patch or feature useful?

Is it a patch for the PostgreSQL back-end, or does it belong on pgFoundry or in the contrib/ directory?

Is it something that is already defined by the SQL standard?

Is it something anyone has suggested before? Check the mailing list archives and todo list.

Most ideas, they cautioned, are, in fact, bad. Also, they warned, make sure your submitted code is well commented, and tested properly.

The PostgreSQL conference will hold a code sprint following the main part of the conference. They recommended checking the code sprint wiki for ideas to cut your teeth on.

PostgreSQL doesn't like centralization

The last session of the first day was on the topic of fund­raising, hosted by Berkus. The discussion started with an introduction to the Japanese PostgreSQL Users Group (JPUG) by Hiroki Kataoka.

In Japan, Kataoka says, PostgreSQL is more popular than rival database MySQL, owing largely to earlier Japanese language support in PostgreSQL. JPUG started with 32 members and eight directors on July 23rd, 1999, Kataoka says. It now boasts 2,982 members, 26 directors, and a Japanese-language mailing list with around 7,000 subscribers.

He showed a map of Japan broken down into its 47 prefectures, showing which had JPUG regional chapters or which otherwise had a PostgreSQL presence. Nearly half the prefectures of Japan have a JPUG regional chapter. JPUG offers a number of activities and incentives, including PostgreSQL seminars, summer camps, a regular newsletter, PostgreSQL stickers -- and PostgreSQL water bottles for distribution at JPUG events. JPUG, which is a registered non-profit in Japan, has numerous corporate sponsors.

Jean-Paul Argudo introduced the French PostgreSQL organization, postgresqlfr.org, of which he is treasurer. It was started in 2004, Argudo says. Its Web site is powered by Drupal, and the group has a presence on irc.freenode.net in #postgresqlfr. It is a registered non-profit under the French law of 1901. It has 50 members who pay €20 per year each. The Web site has some 2,000 users. The organization invites donations through its Web site but managed a mere €25 of Web donations in its first year.

The Web site, Argudo says, has around 1,400 pages of translated PostgreSQL documentation, information on migration, and translated news and information from the main PostgreSQL website. Work is in progress to produce books, he added.

Berkus introduced the rest of the world's organizations, noting that PostgreSQL currently deals with four non-profit organizations for fund-raising: JPUG in Japan, PostgreSQLfr in France, FFIS in Germany, and the US-based Software in the Public Interest (SPI) for most of the rest of the world. PostgreSQL joined SPI after finding that creating its own 501(c)(3) US non-profit organization was a very difficult and expensive proposition.

PostgreSQL, he says, does not like centralization.

Josh Berkus

Following these introductions, Berkus led a discussion on the nitty-gritty of PostgreSQL's internal political structure, especially as it related to dealing with the non-profit organizations and organizing PostgreSQL's money.

Day two of the conference

The second day of the conference was far more intensely technical than the first, with a variety of talks by developers about PostgreSQL sub-projects such as pgpool, pgcluster, and Tsearch2, among other topics.

During the morning session, Peter St. Onge of the Department of Economics at the University of Toronto gave a talk on the role of databases in scientific research. St. Onge says that PostgreSQL's flexibility, extensibility, and speed make it ideal for the research environment.

He discussed the unique needs of databases in research environments. Each lab, he says, is different.

Most currently operate on a Linux, Apache, MySQL and PHP (LAMP) platform, but research labs are switching to what he termed a Linux, Apache, PostgreSQL, and PHP (LAPP) platform.

St. Onge says his goal is to put data-handling logic into the database back-end. From samples, to mass spectroscopy, to analysis, to storage, to archiving, every step that a person has access to creates room for error. Every step that can be automated is an improvement.

A lot of data in different labs is stored in different units, he noted. Allowing basic functions within the database such as conversion of degrees Fahrenheit to degrees Celsius, for example, would allow better integration of data from multiple research facilities.
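
As a minimal sketch of what such an in-database helper could look like -- assuming a simple SQL function and a psycopg2 connection; the database name and the function itself are illustrative, not something St. Onge presented:

```python
# Hypothetical example: define a unit-conversion function once in the back end
# so data recorded in Fahrenheit can be queried in Celsius everywhere.
import psycopg2

conn = psycopg2.connect("dbname=labdata user=postgres")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE OR REPLACE FUNCTION f_to_c(fahrenheit double precision)
        RETURNS double precision
        LANGUAGE sql IMMUTABLE
        AS $$ SELECT (fahrenheit - 32) * 5.0 / 9.0 $$
    """)
    cur.execute("SELECT f_to_c(98.6)")
    print(cur.fetchone()[0])  # roughly 37.0
conn.close()
```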

The 10th anniversary PostgreSQL conference went well, overall. Session time limits were strictly enforced and technical problems were at a minimum, making for a smoothly run conference. All the talks were recorded, and most of them were recorded on video. Anyone interested in hearing any of the talks should be able to do so on the conference Web site in the next few weeks.

The material was largely highly technical, often way over my head, but the people were down to earth, and judging by the reactions of people around me, most understood and appreciated what was being said.

The conference operated on a budget of around $30,000, including travel stipends for many of the presenters.

Out of 90 people registered, Berkus says that only five failed to show, "some due to specific issues (like health problems). Four 'extra' people who had not registered due to some significant communications issues did show, so we were still slightly over capacity." A further 11 people were wait-listed and unable to attend as a result.

There is already discussion of a reprise of the conference. Says Berkus, "We're currently discussing the possibility of a conference next year, maybe even a 300-attendee user conference. We're somewhat undecided about whether to do it next year or the year after though, and where it should be located. A survey will go up on the conference Web site sometime soon -- if you're interested in the next PostgreSQL conference, please watch for it (use the RSS feed) and fill it out."

As for the long-term consequences of the conference, Berkus says he's "hoping that it will lead to better coordination and communication in our really far-flung community. Having developers from so many different parts of the community face-to-face, even once, should help us overcome some barriers of language, distance and time zones.

"We should see some accelerated code development soon with people sharing ideas. For example, I think the various replication/clustering teams learned a lot from each other. I also think that, having met people in person, there will be subtle changes in the way we regard each other back on the mailing lists. A bunch of people didn't look or sound like I expected. I'm not sure what those attitude changes will be, but I'll find out soon enough."

This year's conference was organized by four PostgreSQL volunteers: Berkus, Andrew Sullivan, Peter Eisentraut, and Gavin Sherry. Next time, says Berkus, they will hire professional help to organize the conference. "Now," he says, "I'm finally going to get some sleep."

Originally posted to Linux.com 2006-07-10; reposted here 2019-11-24.

conferences foss 2812 words - permanent link - comments: 0. Posted at 13:37 on July 10, 2006

Wine, desktops, and standards at LinuxWorld Toronto

TORONTO -- The final day of the LinuxWorld Conference & Expo Toronto was a busy one. Novell Canada CTO Ross Chevalier delivered a keynote address on why this year is the year of corporate Linux desktop adoption -- as opposed to all those previous years that were; Free Standards Group executive director Jim Zemlin explained the importance of the Linux Standard Base; and developer Ulrich Czekalla gave an excellent presentation on the state of Wine.

Czekalla discussed the status of the Wine (Wine Is Not an Emulator) project's Win32 API implementation for Linux, and gave his presentation using Microsoft PowerPoint running under Wine. Czekalla has been working with Wine since 1999, when his then-employer Corel needed it for WordPerfect and CorelDraw support in Linux. Czekalla is now an independent contractor, but he says he spends a lot of time working with CodeWeavers on Wine.

He emphasized the importance of the Wine project, and cited a study by the Open Source Development Labs (OSDL) that says the number one concern of companies looking at migrating to Linux is that their applications need to be able to run in the new environment. The applications in question, he says, are not Microsoft Office or other things for which open source substitutes exist, but things like Visual Basic programs and other applications developed in-house, or niche applications, critical to the function of the business.

Wine started in 1993 initially to bring games to Linux. It is not an emulator, he stated, it is a free implementation of the Win32 API, intended to allow Windows executables to be run in the Linux environment. It is released under the GNU Lesser General Public License (LGPL). Czekalla says that Wine is developed by about 665 developers, with 30 to 40 active at any given time.

He explained the makeup of the layers between the operating system and the program being run in the form of a chart explaining at what level Wine runs on a Linux system. Wine runs in user space, he says, not kernel space, and therefore has no access to hardware or drivers. To the Linux machine, it is just an application. It runs between the kernel and the libraries needed to load the Windows executables, allowing Windows programs to find the Windows libraries they are expecting within Linux.

Theoretically, he says, applications in Wine should run just as fast as they do under Windows. As there is no emulation, there is nothing to slow the execution. However, Wine is still not considered a stable release, and has not yet been optimized, resulting in lower performance. At the moment, Wine developers' efforts are focused on making it work, not on optimization. That, he says, will have to come later.

As work progresses on Wine, a lot of effort is put into making one particular application work at a time. Czekalla says that as problems with one application are fixed, many other applications will also become functional in Wine, as the features enabled for the targeted application are also needed by other applications. He cited the process of getting Microsoft Office 2003 to work under Wine as an example of this, calling the side effects for other programs "collateral damage."

Czekalla says that while Wine is included with most Linux distributions, it still often needs manual tweaking to make it work with different programs and, in spite of being in development for 13 years, is still in its 0.9 version. Wine releases always have bugs, he cautioned, and one should be prepared for odd crashes. Supporting Wine, he noted, requires someone with both Windows and Linux expertise.

Ulrich Czekalla pauses to take a question

Debugging Wine, he says, requires the use of relay logs that can be as large as a gigabyte to see what happened within the application as it progressed, in an effort to figure out what might have killed it. He says to expect it to cost about $1,000 per bug to fix minor bugs, and between $4,000 and $20,000 to fix more difficult bugs. A Wine implementation can look great but become problematic and expensive.

Where is Wine going? There are some messy areas, says Czekalla. One of these is its Component Object Model (COM) implementation. After years of work, it is still not done and is at least six months away from being able to talk to a Microsoft Exchange server.

Right now about 70% of programs can install with Wine's implementation of the Microsoft Installer (msi.dll) library. In another year, Czekalla expects that number to be about 90%, though he pointed out that while most programs that install will work after being installed, there is no guarantee. One problem Wine has had is with the device-independent bitmap (DIB) engine. At the moment it supports only 24-bit color depth graphics, but many Windows programs still use only 16 bits. He described this as a problem with X.

Because Wine is a userspace program and cannot see hardware, things such as USB devices (like a USB key) cannot be directly seen by programs running under Wine. An effect of this, he says, is that programs requiring Digital Rights Management (DRM) will not work under Wine, and won't until Linux gets native DRM support. Companies that use DRM, he said, are not willing to help. He gave the latest version of Photoshop as one example of a program that has implemented DRM and will not work in Wine.

Before taking questions, Czekalla pointed out that a lot of work could be saved by using Microsoft's own libraries (DLLs), but the problem is licensing. You need an appropriate Windows license for any libraries you borrow, though it was pointed out by an audience member that most people already have unused Windows licenses they can use from hardware they've bought that included Windows. Czekalla says that Internet Explorer is the Windows application used most often in Wine, because it's needed by many people for specialized functionality relating to their jobs that will only work in IE.

The first person to ask a question noted that Wine sounds like a pain in the neck, so why should you use it? It is a pain, agreed Czekalla, but a lot of applications do work. For migration, the choices are basically Wine or VMware. Wine, he noted, doesn't require a Windows license or the performance hit of emulation, while VMware does.

Another attendee asked, who is using Wine? Czekalla cited Intel, Dreamworks, Disney, and basically any company that is migrating from Windows. Wine is unstable, he says, but when rolling your own implementation it can be quite stable for your purposes.

Is Novell involved in Wine? Czekalla says he knows some people at Novell who are contributing, but as a corporation he doesn't believe so.

One person asked about Wine's relationship with TransGaming. Czekalla says that not much is happening there. A few years ago, TransGaming modified and sold Wine's code, perfectly within the Wine license of the time, the X11 license -- which doesn't require redistribution of derivative code. In response, Wine switched its license to the LGPL, which he surmised TransGaming didn't like, as the project hasn't heard much from the company since. Czekalla expressed the hope that TransGaming and Wine can eventually get back to working together, as there is a lot of duplication between the two.

Linux for the corporate desktop

Novell Canada CTO Ross Chevalier's keynote on the third day of LWCE Toronto was "2006 - The Year of Linux on the Corporate Desktop." He started by acknowledging that his keynote's title was a cliché, and asked, why 2006? Why not? It's always the year of something, and he believes Linux really is ready to hit the corporate desktop.

He pointed to a Novell project called Better Desktop that focuses on desktop usability for Linux. The idea behind it, he says, is to develop the Linux desktop interactively with actual users of Linux desktops in workplaces rather than test environments to figure out what it is they need. The goal is to help companies transition to Linux desktops without any serious retraining costs. People want things to work as they're used to them working, and the project is working toward that.

Eye candy, he says, is important.

It holds people's attention and helps them learn. He returned to that theme later with demonstrations of desktop Linux eye candy, such as a three-dimensional cube rendering of multiple desktops in X, allowing the cube to be spun on screen and windows to be dragged between and across desktops.

In order for Linux to be widely adopted, he says, Linux desktop quality and hardware support must be better than the as-yet-unreleased Microsoft Vista and Mac OS X 10.5 operating systems. To that end, USB and FireWire devices, printers, and the like must just work when attached, as users expect from Windows and Mac OS, or adoption slows.

Ross Chevalier tosses a SUSE gecko to someone who answered a question

Searching desktop computers, he says, has to be easy. People with large hard drives and large numbers of files need to be able to find stuff easily. He pointed out that the number one and number two Web sites according to Alexa's ratings are Yahoo and Google, respectively. Search, he says, is important. Password-protected office files have to be as easy to use in OpenOffice.org as they are in Microsoft Office, he says, or adoption doesn't happen.

After listing the requirements needed for adoption, he started a demonstration on his SUSE Linux laptop to show that all the capabilities he listed as being required do indeed exist. He concluded that we are up to the point where a Windows user can go to Linux and feel comfortable.

The use of the Linux Standard Base

Jim Zemlin of the Free Standards Group headlined a session entitled "Open Source and Freedom: Why Open Standards are Crucial to Protecting your Linux Investment." The Free Standards Group is a California-based non-profit organization, Zemlin says, with a broad range of members including "basically everyone but Microsoft." Membership spans the globe, he says, with members in many countries.

The Free Standards Group's main focus is the Linux Standard Base (LSB), which is an ISO standard. The goal of the LSB is, according to Zemlin, to prevent Linux fragmentation. A common misconception about open source, he says, is that using open source software prevents the problem of vendor lock-in. He cautioned that this is not true. Open source is a development methodology, and choice is not guaranteed.

Zemlin pointed out, somewhat ironically, that many companies insist on having one throat to choke while complaining about vendor lock-in. The single throat you are choking, he noted, is the vendor that has locked you in. With open standards, he says, you get a choice of throats to choke.

The Linux Standard Base offers a standard for installation, libraries, configuration, file placement, and system commands, among other things. Zemlin says this gives independent software vendors (ISVs) an easier way to develop Linux software, as they know where to find everything and can expect all Linux systems to have the same basic structure. Developing around the Linux Standard Base means that ISVs need only test their product against one distribution, rather than setting up several test cases, saving time and effort on the quality assurance side of development.

Because the FSG is controlled by open source vendors, its release cycle for an updated standard base is about 18 months, comparable to most distributions' own upgrade cycles. As a result of this constant updating, the ISO standard must also be updated regularly. The standard must move with the ever-fluid open source community it is attempting to standardize. All the major commercial Linux distributions, Zemlin says, are LSB-compliant.

In retrospect

This year's LinuxWorld Conference & Expo Toronto saw a far improved set of speakers compared to last year, with a noticeable increase in the ratio of useful speakers to marketing droids. Conference organizers state that the conference saw more pre-registrations this year than in previous years, but final numbers on actual attendance have yet to come out.

Originally posted to Linux.com 2006-04-27; reposted here 2006-04-27.

conferences foss 2034 words - permanent link - comments: 0. Posted at 13:22 on April 27, 2006

(RSS) Website generating code and content © 2006-2019 David Graham <cdlu@railfan.ca>, unless otherwise noted. All rights reserved. Comments are © their respective authors.