The world according to David Graham
All stories filed under conferences...
Real World Linux 2004, Day 1: A real world experience
Real World Linux 2004, Day 2: Keynotes
Real World Linux 2004, Day 3: The conclusion
OS conference endures PowerPoint requirement on Day 1
Red Hat, Microsoft clash at open source conference
Creative Commons highlights final day of OS conference
Ottawa Linux symposium offers insight into kernel changes
Linux symposium examines technicalities of upcoming Perl 6
OLS Day 3: Failed experiments, LinuxTiny, and the Linux Standard Base
Ottawa Linux Symposium day 4: Andrew Morton's keynote address
LWCE Toronto: Day 1
LWCE Toronto: Day 2
LWCE Toronto: Day 3
Ottawa Linux Symposium, Day 2
Ottawa Linux Symposium, Day 3
Ottawa Linux Symposium, Day 4
Security and certification at LinuxWorld Toronto
Wikis, gateways, and Garbee at LinuxWorld Toronto
Wine, desktops, and standards at LinuxWorld Toronto
PostgreSQL Anniversary Summit a success
First day at the Ottawa Linux Symposium
Day two at OLS: Why userspace sucks, and more
Day 3 at OLS: NFS, USB, AppArmor, and the Linux Standard Base
OLS Day 4: Kroah-Hartman's Keynote Address
DebConf 7 positions Debian for the future
Day one at the Ottawa Linux Symposium
Kernel and filesystem talks at OLS day two
Thin clients and OLPC at OLS day three
OLS closes on a keynote
Ontario LinuxFest makes an auspicious debut
Ottawa Linux Symposium 10, Day 1
OLS: Kernel documentation, and submitting kernel patches
OLS 2008 wrap-up
Displaying the most recent stories under conferences...
OLS 2008 wrap-up
Day 3 of this year's Ottawa Linux Symposium featured a number of sessions, most notably a keynote address by Ubuntu founder and space tourist Mark Shuttleworth, who called for the greater Linux community to start discussing synchronicity, his term for having major software releases synchronised. The conference wrapped up on Saturday with some final interesting sessions and statistics.
Shuttleworth was, per OLS tradition, introduced by 2007 keynote speaker James Bottomley, who showed a graph of Shuttleworth's kernel-related mailing list contributions over the years, noting three years in which nothing happened: the first, in which he received half a billion dollars; the second, in which he was "not on planet Earth"; and the third, in which he was busy founding the Ubuntu Linux distribution.
Shuttleworth's talk was called "The Joy of Synchronicity." It was a visionary statement about how to grow the Linux market for everyone and reduce software development waste. As the world changes, so too must we, he said.
Development has to be driven by three major factors, Shuttleworth said: Cadence, Collaboration, and Customers.
Cadence is the pace of release of any given project. It is a regular, predictable time at which the next version will be released, a release cycle tied to the calendar. For example, Ubuntu is targeting a six-month release cycle for point releases with a predictable two-year release for major releases, he said. GNOME is good at this, though its initial attempts were met with some difficulties. It is now on a six-month cycle, and KDE is beginning to explore the idea.
Synchronicity, for Shuttleworth, is all about aligning the cadences of several projects for the benefit of the customers. If the Linux kernel, gcc, KDE, and GNOME, to start, were always at the same version in each co-released Linux distribution, Shuttleworth argued, it would reduce code waste and help grow the Linux market for everyone.
The point is simple. In Shuttleworth's vision, distributions would all be on the same versions of major software, but would always retain their other traits, differentiating them from each other and keeping the diversity of Linux distributions as lively as it is today. The predictability of releases would help all around.
Kernel developers, he argued, would have an easier time developing if they knew exactly which versions of the kernel would be used when by what distributions. The same would apply to all aspects of the open source community.
Shuttleworth expressed hope that such a predictable, marketing-friendly setup would grow the total Linux market for every distribution.
Sustainable Student Development in Open Source
Earlier in the day I attended an interesting talk by Chris Tyler of Seneca College who discussed a strategy the school has developed to educate students in open source technology and development.
Many students get involved in open source software and the community on their own, but do it outside of their coursework. Seneca College, from Tyler's explanation, has been looking to incorporate open source development directly into the curriculum. Under its system, senior-year students at Seneca choose from a list of open source projects seeking help, and contribute to them as their class projects.
Most of the efforts so far have been within the Mozilla project. One thing Tyler noted is that students are not used to large projects. Thousands or tens of thousands of lines of code is something that students can grok and understand right through. But once they start dealing with larger projects, like Mozilla, which run to millions or tens of millions of lines, there is too much code for any one person to know right through.
The other side to that is that faculty can also be overwhelmed. It is critical, Tyler noted, that faculty involved in this program be both familiar with the academic environment, as professors necessarily are, and integrated with and active in the open source community. Without that intrinsic understanding of the community, faculty members cannot be expected to do well. To that end, Tyler commented that other institutions have contacted his department about using the curriculum, but they are advised that it is not the curriculum that makes the project a success, but the integration between the faculty and the open source community.
A significant difference that Tyler noted between open source projects and normal assignments for students is that in a typical assignment, the student is responsible for the complete coding project, from design to implementation. In an open source project, they can be using code that already exists as part of a larger project and is as much as 20 years old.
While Tyler indicated that open source was clearly not for all students, some of whom are not happy working on group projects in that way, he said the successes far exceeded the failures. He cited a number of examples, one of which was of a student who took on the challenge of documenting a previously undocumented API. This led to the question of how such assignments are graded. Tyler explained that the marking is done as an assessment of the contribution to the open source project and the accomplishment of the student's stated goals. Thus it does not necessarily have to be a coding project to be a successful project.
Another example he cited was a student who developed an animated Portable Network Graphics implementation which he called APNG. It was less cumbersome than MNG, the PNG project's implementation of the same task, and has been merged into Firefox, Opera, and soon Microsoft Internet Explorer, although it was rejected by the PNG group itself.
The course requires real contribution to real, existing open source projects as a normal new contributor. A critical component of the course is to encourage the developers and the students to interact on an ongoing basis, preferably actually meeting in person at some point, during the course. As one example, he noted that students and developers of Mozilla interact on an ongoing basis in the #seneca channel on irc.mozilla.org.
Tyler said that the course works within an open source philosophy in its own right. The course notes and outline are posted on wikis, the projects are developed with them, and coursework is submitted through a developer blog aggregator, with each student required to create an aggregated blog to cover his progress. This setup also allows other members of the projects involved to help keep the course and project information accurate.
More information about his project can be found on Tyler's own blog.
Peace, love, and rockets
Worth brief mention is Bdale Garbee's talk on using open source and open hardware to build a useful telemetry system for model rockets. Garbee spent some time outlining the model rocket hobby and explaining the shortcomings of altimeters and accelerometers currently available, namely that they are not easily hackable. He said he has been told that his main hobby is turning his other hobbies into open source projects.
The fourth and final day of Ottawa Linux Symposium started for me with an entertaining trip down memory lane by D. Hugh Redelmeier in a talk entitled "Red Hat Linux 5.1 vs. CentOS 5.1: Ten Years of Change." Redelmeier took Red Hat Linux 5.1, released in June 1998, and compared it on the same computer to CentOS 5.1, a free version of Red Hat Enterprise Linux 5.1 released late last year. He chose the two systems because of the time separation, direct lineage, and coincidentally numbered versions of the two operating systems. He compared them by dual booting them on a 1999-built Compaq Deskpro EN SFX desktop machine with 320MB of RAM, upgraded from the original 64MB, and a 120GB hard drive, upgraded from the original 6.4GB drive that came with the machine.
Redelmeier described installing two versions of a Linux distribution nearly a decade apart in age on the same hardware as a "bit of a trick." For example, he said, Red Hat 5.1 only understood hard drive geometry as CHS (Cylinders, Heads, Sectors). How many people remember CHS, he asked? The standard bootloader at that time, LILO, had to be installed on a cylinder below 1024. On a 120GB drive, that meant ensuring that /boot showed up in the first 8.5GB of the drive. Except that Red Hat 5.1 had not yet introduced the concept of /boot as a separate partition (that did not come until 5.2), and so the root partition needed to be in the first 8.5GB of the drive, a relic of old AT BIOSes.
Among his other surprises were that CentOS 5.1 and Red Hat 5.1 could not share a swap partition. Red Hat 5.1 could not read the CentOS swap partition without running mkswap on boot, which is not a normal boot procedure. Red Hat 5.1, he noted, was limited to a 127MB swap partition anyway. That version of the distribution could be installed in 16MB of RAM, so 127MB of swap seemed like an awful lot at the time. The computer Redelmeier chose did not have an optical drive, and so he was forced to install CentOS 5.1 using PXE boot. CentOS also requires a yum update once installed, which he described as very slow on that machine.
His observations from the process include noting that GRUB is generally better than LILO, as he had an opportunity to re-experience such entertainment as "LILILILILI..." as a LILO boot error.
Redelmeier indicated that he has been using Unix in some form or another since 1975. Considering that, he said, the Red Hat 5.1 Unix environment is "pretty solid." There were a "few stupidities," he said, "like colour 'ls'." Looking at it now, he noted, FVWM, the window manager in Red Hat 5.1, had an old feel to it. Another age-old piece of software, xterm, he said, was still mostly the same, except that in Red Hat 5.1, xterm had been improved slightly to use termcap, which broke it when you tried to use it remotely from, for example, SunOS.
Red Hat 5.1 did not come with SSH; at the time it still had to be downloaded from ssh.fi. To log into the machine, he used rlogin with Kerberos. OpenSSH requires OpenSSL and a newer version of zlib than was available for Red Hat 5.1, something he was not inclined to backport. Redelmeier warned of "cascading backports" when trying to use newer software on such old installs.
Security, too, is quite bad in the original Red Hat 5.1, he commented, but the obscurity factor largely made up for it.
Another lesson he learned in the process of comparing the installs is about "bitrot." Redelmeier commented that the original pressed CDs that came in the box still worked fine, but his burned update CD had bonded to the CD case and was no longer usable. Avoid bitrot, he cautioned, by actively maintaining stuff you care about.
Issues in Linux mirroring
John Hawley, admin for the kernel.org mirrors, spoke in the afternoon about "problems us mirror admins have to scream about."
Not every mirror has 5.5 terabytes of space to offer the various distributions that need mirroring, Hawley said. Some mirrors only have as little as one terabyte to offer. Yet in spite of this, many distributions leave hundreds of gigs of archival material on mirrors. Hawley asked that distributions make it optional to mirror admins whether or not they take these archives. Fedora and Mandriva alone, he noted, use up fully half of his mirror space, while Debian, at a paltry half terabyte, has cleaned up its act on request. He warned that if other distributions don't start reducing their mirror footprint, mirrors will no longer be able to carry them.
Disk cache is a major constraint on mirrors, Hawley warned. Disk I/O is the most significant part of any mirror operation. No mirror can keep up, he noted, with 2,000 users downloading distributions at the same time if the servers are not able to cache the data being sent out. Cache runs out, I/O use goes up, disk thrashing begins, the load goes up, and it is nearly impossible to get it back under control without restarting the HTTP daemon.
Keep working sets as small as possible, Hawley asked. His servers have 24GB of RAM, yet a distribution today can be 50GB. Distributing the whole distribution means that some of it necessarily has to be read from disk at any given time, since only half of it can fit in RAM. Add multiple releases at the same time, and pretty soon mirrors are no longer able to keep up.
Hawley asked that distributions coordinate so as not to release at the same time. "I don't care what Mark said: it's bad!" Hawley exclaimed, in reference to Mark Shuttleworth's keynote. Last year, Hawley noted as an example, Fedora, openSUSE, and CentOS all released within three days of each other, swamping mirrors. When that happens, he said, "we are dead in the water." Please, he said, when doing releases, coordinate with other distros so as not to release the same week.
Hawley strongly suggested that distributions need to learn to keep mirror operators in the loop on release plans. Sometimes, he said, the only way he knows that one of the distributions he is mirroring has released a new version is by the spike in traffic on his mirrors. When a distribution is preparing to release, he suggested sending repeated loud, clear emails to mirror admins to warn them of this fact.
And then Hawley really got started. He said he does not know of many admins who like BitTorrent.
Users think it's the best thing since sliced bread, and distributions and mirror admins are answering that demand. But Hawley would rather that users be informed as to what is wrong with BitTorrent.
So why is BitTorrent considered harmful?
The original idea, Hawley said, is to allow multiple users to download from the other people downloading.
This, he said, is great for projects with large datasets but small numbers of downloads. But once the volume rises, BitTorrent "falls flat on its face." Every client needs to talk to the tracker to get the source of its next segment and check the checksums of what it has. The tracker itself becomes a single point of failure, and a bottleneck to the download. There's no concept in BitTorrent of mirrors versus downloaders, as everyone takes on both roles. This also means that any user of BitTorrent sinks to the lowest common denominator. If, for example, in your cloud of downloaders, there is a 56K modem user, that user can slow down the rest of the users' downloads considerably as they wait to get chunks out of that modem.
BitTorrent, Hawley said, is complex for everyone. It adds manual labour to set it up to work on the mirrors, it is slow to download, and he explained that BitTorrent as a whole cannot even keep up with a single major mirror.
With graphs to back him up, Hawley showed that in the first week of Fedora 8's release, the total number of BitTorrent downloads of the release across all sources was roughly equivalent to the number of direct downloads from the kernel.org mirror alone; yet some 25% of all bits traded over BitTorrent for Fedora 8 still came from the kernel.org mirrors.
Among its problems, BitTorrent is a largely manual process to set up for mirror admins. BitTorrent does not inherently have a way to automatically detect and join existing torrents, nor does it have an easy way to create a torrent from existing data. Aside from that, its chunk approach to data distribution causes disk thrashing on the servers. Per download, he said, BitTorrent is 400 times more intensive than a single direct download from a mirror, largely on the client side, because of its weird disk seeks.
With a Web server, the server can simply use a kernel function called sendfile() to pick up a file and send it. With BitTorrent, a file is divided into small chunks, each of which must be sought out and distributed separately.
If BitTorrent continues to thrash mirrors, he warned, mirrors will no longer participate.
Peer-to-peer distribution for Linux distribution releases has a role, he said, but BitTorrent is not the answer.
This marks the tenth consecutive year of the Ottawa Linux Symposium. Organisers say that 600 people attended this year, in spite of the weak US economy and the scheduling conflict with OSCON, which scheduled itself for the same week as OLS's traditional time slot and has again for next year. Some attendees at OLS, including keynote speaker Mark Shuttleworth, attended part of each conference to reconcile this conflict.
The Ottawa Congress Centre, where OLS has taken place for the past 10 years, is being torn down and rebuilt over the next three years. As a result, OLS is "going on the road" and will take place in Montreal at the Centre Mont-Royal next year, with dates to be determined.
As per tradition, Craig Ross, one of OLS's two key organisers along with Andrew Hutton, gave the closing announcements and statistics at the end of the last day. In 10 years, there have been approximately 5,000 attendees, 850 talks, 23 calls from embassies, 11 calls from authorities, 2 attendees found asleep in the fountain at the closing reception (alcohol is provided, in case you were wondering), and some 50,000 beverages consumed.
And of course, Ross had to post a slide showing T-shirt sizes issued over the conference's history (a slide photographed by Yani Ioannou), showing the, ahem, enlarging Linux community.
Originally posted on Linux.com 2008-07-28; reposted here 2019-11-22.
words - whole entry and permanent link. Posted at 15:35 on
July 28, 2008
OLS: Kernel documentation, and submitting kernel patches
The second of four days at the 10th annual Ottawa Linux Symposium got off to an unusual start as a small bird "assisted" Rob Landley in giving the first talk I attended, called "Where Linux kernel documentation hides." The tweeting bird was polite, only flying over the audience a couple of times and mostly paying attention.
Landley did a six-month fellowship with the Linux Foundation last year to try to improve the Linux kernel's documentation. He explained that it was meant to be a year, but after six months he had come to some conclusions about how documentation should be done, which he said the Linux Foundation both agreed with and did not plan to pursue, and so he went back to maintaining his other projects.
Where, asked Landley, is kernel documentation? It's in the kernel tarball, on the Web, in magazines, in recordings from conferences like OLS, in man pages, on list archives, on developers' blogs, and "that's just the tip of the iceberg." The major problem is not a lack of documentation, he said, but that what is out there is not indexed.
The challenge in providing useful documentation for the Linux kernel, Landley said, is therefore to index what is already out there. When a source of some documentation for some item gains enough traction, it becomes the de facto source of documentation for that particular subsection of the kernel, and from then on gets found and maintained. But there is a big integration problem, as such sources of documentation are scattered around.
It is hard enough for Linux kernel developers to keep up with the Linux Kernel Mailing List, Landley noted, let alone to read all the other lists out there and keep track of the ever-growing supply of documentation. Putting all the kernel documentation found around the Internet together is itself a full-time job. Jonathan Corbet of LWN, he noted, is good at this already, but there are several people doing it each their own way in their own space.
The Linux kernel developers' blog aggregator, kernelplanet.org, and other aggregators offer a huge amount of information as well, Landley noted. But he said we need to aggregate the aggregators. Google is inadequate for the challenge, he said, as it can take half an hour to find some pieces of information, if you can find them at all, and it only indexes Web pages, not, for example, the Documentation directory in the kernel source tarball.
So what are the solutions? Landley explained how he set up a new page on kernel.org called kernel.org/doc, where all the aggregated documentation is stored in a Mercurial archive and is automatically turned into an indexed Web page. Adding information to this database is a task that requires a lot of editing, Landley said, quoting Alan Cox: "A maintainer's job is to say no." As the maintainer of the kernel documentation on kernel.org, Landley sees himself as mainly responsible for rejecting submissions, as one would with kernel patches. As a tree in its own right, the documentation has to be kept up to date and managed.
Asked why he does not use Wikipedia rather than the kernel.org/doc system, Landley explained that on Wikipedia, you cannot say no, so there's no real quality control on the information available, and it lacks a rational indexing system, which is still the core problem.
Landley said his sixmonth term with the Linux Foundation ended 10 months ago. While he is still responsible for the section, he no longer has the time to maintain it himself and stated that what is really needed is a group of a dozen or so dedicated volunteers under a maintainer to handle kernel documentation as its own project.
On submitting kernel features
Another interesting, if somewhat difficult to follow, talk on this day was given by Andi Kleen, who presented a brief course entitled "On submitting kernel features."
Kleen, a self-described "recovering maintainer," asked, "Why submit patches?" then said people submit kernel patches for a variety of reasons. The code review involved in submitting patches usually improves code quality. Including code in the kernel allows it to be tested by users for free. Having code in the mainline kernel, rather than maintaining it separately, keeps it clear of interface conflicts. And you get free porting to other architectures if your feature becomes widely used. Getting code into the kernel, he said, is the best way to distribute a change. Once it is in the mainline kernel, everyone uses it.
So how does one go about doing it?
Kleen outlined a few easy steps for submitting features to the kernel, and included two case studies to explain the points. The basic process as he explained it is:
You, the developer, write code and test it, and submit it for review. You fix it as needed based on the feedback from the review. It gets merged into the kernel development tree by the maintainer responsible for the section of the kernel that you are submitting a patch for. It gets tested there. And then it gets integrated into a kernel that is then released.
The basic things to remember when submitting code: the style should be correct and in accordance with the CodingStyle document in the kernel documentation directory found in the kernel source tarball. The submitted patch should work and be documented. You should be prepared for additional work relating to the code as revisions and updates as needed. And expect criticism.
Kleen compared submitting kernel code to submitting a scientific paper to a journal for publication. Getting attention for your kernel patch means selling it well. There is generally a shortage of code reviewers, and the maintainers are often busy. In some cases, you could be submitting a patch to a section of the kernel that has no clear maintainer. So selling your patch well will get you the reviewers needed to get started on the process.
You have to sell the feature, Kleen said, and split out any problematic parts where possible. Don't wait too long to redesign parts that need it, and don't try to submit all the features right off the bat. As his case study, he discussed a system he wrote called dprobes. After a while of it not going anywhere, Kleen resubmitted the patch as kprobes with a much simpler design and fewer features, and the code became quickly adopted.
There are several types of code fixes one can submit, Kleen said. The clear bugfix is the easiest to do and sell. He advised against overdoing code cleanups, because bugfixes are more important. And for optimisations, he suggested asking yourself a number of questions: how much does it help? How does it affect the kernel workload? And how intrusive is it?
In essence, Kleen said, a patch submission is a publication. The description of the patch is important.
Include an introduction. If you have problems writing English, get help writing the introduction and description, he advised.
Over time and patches, the process becomes easier. When a kernel maintainer accepts a patch from you, it means he trusts you. The trust builds up over time. Kleen recommended making use of kernel mailing lists to do development on your patches, and suggested working on unrelated cleanups and bugfixes to help build trust.
Kleen's presentation is available on his Web site.
Originally posted on Linux.com 2008-07-25; reposted here 2019-11-22.
Ottawa Linux Symposium 10, Day 1
The tenth annual Ottawa Linux Symposium kicked off Wednesday in Canada's capital, just a few blocks from the country's parliament building, in a conference centre in the midst of being torn down. The symposium started with the traditional State of the Kernel address, this year by Matthew Wilcox. Among the dozens of talks and plenaries held the first day was kernel wireless maintainer John Linville's Tux on the Air: the State of Linux Wireless Networking.
The Kernel: 10 Years in Review
Matthew Wilcox gave the traditional opening address this year in place of Linux Weekly News's Jonathan Corbet, who has done it for several years and has been a staple of OLS's proceedings. Wilcox introduced himself as a kernel hacker since 1998, whose work history includes stints at Genedata, Linuxcare, and Intel, where he is today.
Getting down to business after a few minutes battling with both the overhead projector and the room's sound system, Wilcox gave a brief history of Linux kernel development. As most regular Linux users know, the kernel 2.6 tree has been around for quite a few years. This is a change from the old way, he explained, where stable releases came out as even numbers such as 2.0.x, 2.2.x, and 2.4.x, with development releases coming out as 2.1.x, 2.3.x, and 2.5.x. Minor kernel releases came out every week or so, with a new stable release approximately every three years. With the 2.6.x kernel, each version is itself a stable release, coming out approximately every three months, each with somewhat less dramatic changes than earlier major releases had.
With the history lesson over, Wilcox explained how kernel development itself is done. With each kernel version, there is a brief merge period, in which tens of thousands of patches are submitted through git, a large and scalable source management utility written specifically for the Linux kernel by Linus Torvalds. The purpose of using git, he explained, is primarily so that everyone is using the same tool. Mercurial, he said, is a comparable tool, but git is preferred for uniformity. CVS, he said, is not scalable enough for the task.
Why should the kernel be changed at all, Wilcox asked. New features are needed, new hardware is made, and new bugs are found. There is always need for change. He noted that 10 years ago, multiprocessor systems were expensive and poorly supported by Linux, but now it is difficult to get a computer without multiple processor cores. And now Linux runs on everything from 427 of the top 500 supercomputers in the world to a watch made by IBM. Over the last 10 years, Wilcox noted, from kernel 2.3 to 2.6.26, Linux has gone from supporting approximately six hardware architectures to some 25.
As an example of Linux's changes over the last decade, Wilcox said that in kernel 1.2, symmetric multiprocessing was not supported at all. In kernel 2.0, SMP support began, with spinlocks being introduced in 2.2 to allow multiple processors to handle the same data structures. Kernel 2.4 introduced more and better spinlocks, and by kernel 2.6, Linux had the ability to have one processor write to a data structure without interrupting another's ability to use it.
After a discussion of details about improvements in the Linux kernel from wireless networking to SATA hard drive support to filesystem changes to security and virtualisation, Wilcox wrapped up his presentation when he ran out of time with a summary of improvements in recent kernel releases and what we can expect in the near future.
Since Corbet's talk a year ago, Wilcox said, kernel 2.6.23 introduced unlimited command length. 2.6.24 introduced virtual machines with anti-fragmentation. 2.6.25 added TASK_KILLABLE to allow processes in uninterruptible sleep to be killed, although he said it is still imperfect and some help is needed with that. 2.6.26 added read-only bind mounts.
On the table for the upcoming Kernel Summit, Wilcox said, are asynchronous operations. The future holds Btrfs, better solid state device support, and SystemTap, among other things.
Wireless networking support in Linux
The next session I attended was called Tux On The Air: The State Of Linux Wireless Networking, by John Linville of Red Hat, who introduced himself as the Linux kernel maintainer for wireless LANs. Wireless, he admitted from the outset, is a weak spot in Linux, quoting Jeff Garzik: "They just want their hardware to work."
Linux wireless drivers typically used to have the wireless network stack built right into the drivers, meaning large amounts of duplication, and causing what Linville called "full MAC hardware" to appear to be normal Ethernet devices to the kernel. Full MAC hardware, he explained, is wireless hardware with onboard firmware. Many recent wireless devices have taken after Winmodems, he said, and provide only basic hardware to transmit and receive, with all the work done by the driver in kernels. This he called "SoftMAC."
After a couple of other approaches were tried, SoftMAC wireless device drivers started using a common stack called mac80211. This proved popular with developers and eliminates a lot of duplicated code. Most new wireless drivers in Linux, he said, now use mac80211. mac80211 was merged into the Linux kernel tree in 2.6.22, with specific device drivers using it following in subsequent versions.
Linville showed a chart of some wireless devices and whether or not they used mac80211, and cited vendors for good or bad behaviour in cooperating with the Linux community to get drivers out for their hardware. Good corporate citizens, he said, include Intel and Atheros, while Broadcom, concerned with regulatory issues around opening up its hardware, refuses to help in any way. He repeatedly suggested that we vote with our dollars and support wireless device vendors who make an effort to support Linux.
The regulatory issues, Linville said, are, unfortunately, "not entirely unfounded." Regulations vary by jurisdiction, but the main concern is about allowing device operation outside of the rules to which they were designed. Vendors are expected to ensure compliance with regulations on pain of being shut down. Some vendors, he noted, proactively support Linux in spite of this.
Regulators, he said, are not worried so much about people using wireless devices slightly outside of their normal bounds, but rather about people using wireless devices to interfere with other systems, such as aviation systems. As long as vendors keep such interference difficult, they believe they remain in compliance with the regulators, and so keeping drivers closed source and effecting security through obscurity helps them achieve that.
Wireless driver development represents a busy part of overall Linux kernel development, Linville said. He noted that his name as the signoff on patches essentially represents wireless development in the kernel.
In kernel 2.6.24, 4.3% of merged patches were signed off by him, putting him in fifth place. In kernel 2.6.25, this was up to 5.0%, and by kernel 2.6.26, Linville signed off on 5.6% of all merged kernel patches, bringing him up to fourth place.
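For the curious, tallies like these can be recomputed from the kernel's git history; here is a minimal sketch (the function name and regular expression are mine, not from Linville's talk) that counts each person's share of Signed-off-by lines in raw `git log` output:

```python
import re
from collections import Counter

def signoff_shares(log_text):
    """Tally Signed-off-by lines per person in raw `git log` output
    and return each person's share as a percentage of all sign-offs."""
    names = re.findall(r"Signed-off-by:\s*(.+?)\s*<", log_text)
    counts = Counter(names)
    total = sum(counts.values())
    return {name: round(100.0 * n / total, 1) for name, n in counts.items()}
```

Against a real kernel tree, one would feed it the output of, say, `git log v2.6.25..v2.6.26`.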
More information about wireless support in Linux can be found at linuxwireless.org.
That's some of the best of the first of four days at the 10th annual Ottawa Linux Symposium. More tomorrow.
Originally posted on Linux.com 2008-07-24; reposted here 2019-11-22.
Posted at 16:13 on July 24, 2008
Ontario LinuxFest makes an auspicious debut
The first-ever Ontario LinuxFest, unapologetically modeled on Ohio's conference of the same name, took place on Saturday at the Toronto Congress Centre near the end of runway 24R at Toronto's international airport. With only a few sessions and a lot of quality speakers, the organisers kept the signal-to-noise ratio at this conference as good as it gets.
The charismatic Marcel Gagné gave the first talk I attended. Gagné started his talk on what's coming in KDE 4.0, which is expected to be released in mid-December, by stating that KDE 4.0 is a radical departure from existing desktop environments, including current versions of KDE.
KDE 4's revamping is based on the premise that user interfaces are not natural or intuitive. We learn to work around the interface instead of designing interfaces that work around us, Gagné said. The best way to evolve desktops going forward is to make them more organic. They should work the way you want them to work.
He then demonstrated KDE 4.0, cautioning that it remains a work in progress. He spent a half hour showing us the various new features already prepared in KDE 4.0. If you like eye candy, KDE's new desktop will keep you happy.
The midday keynote address on Linux's past, present, and future came from Theodore Ts'o, the first Linux developer in North America, who joined Linus in developing the operating system in 1991.
Noting the operating system's earliest history, Ts'o jokingly described Linux as a glorified terminal emulator. He asked his audience to name the dates when Debian passed its constitution, when Red Hat released version 3.0.3, the release date of the Qt Public License, and when it was that Richard Stallman requested Linux be renamed to Lignux. Amid several guesses, he said 1998, 1996, 1998, and 1996 respectively. Much of that history, he said, is already 10 years in the past.
Ts'o gave a brief history of Linux: In July 1991, Linus wrote the very first version. In 1992, X was added and the first distro was created. In 1994, Linux 1.0 was released and for the first time included networking support. A year later 1.2 came out; it was the first kernel to have multi-platform support, with the addition of SPARC and Alpha. In 1996, multi-CPU (SMP) support was added. In 1997, Linux magazines began to show up, and the user base was estimated at around 3.5 million people. In 1998, Linux received its first Fortune article, gaining corporate attention. In 1999, Linux 2.2 came out, and its user base had risen to an estimated 7 to 10 million people. That year also saw the dot-com bubble and the rise of Linux stocks such as Red Hat and VA Linux, the latter of which gained a record 698% the day it began trading. Briefly, Ts'o said, VA had a larger market cap than IBM.
In 2000 the slump began, but lots of cool work was still being done on Linux. By 2001, he said, Linux was used by an estimated 20 million users. In December 2003 Linux 2.6 was released and a new release model was adopted. Linux began to be taken for granted by corporations. 80% of Sun Opterons were running Linux rather than Solaris. And, around then, SCO started its lawsuit.
Today, Ts'o said, we are into our second round of 2.6-kernel-based enterprise Linux distributions. There's more competition. Vista's unpopularity has resulted in Microsoft extending Windows XP's life by an extra six months. Its failure is an opening for Linux. Sun is starting to get open source too, open sourcing Java.
Sun, Ts'o said, is releasing Solaris under a GPL-incompatible license, and 95% of Sun's code is still developed by Sun. Sun is worried about quality assurance, he said, but so is the free software community. Sun, he said, used to have a policy that if your code commits broke the build three times, you were fired. It's hard, he commented, to move from that environment to open source.
Ts'o's employer, IBM, is by contrast a small part of the community, he said, but is happy with that. It takes the attitude that it doesn't have to do everything itself.
And SCO has declared bankruptcy, Ts'o said to widespread applause in the room. But that said, open source software faces legal issues, particularly with the US Digital Millennium Copyright Act (DMCA), which the US government has been attempting to export, most recently to Australia. Now there is a patent troll suing Novell and Red Hat. Trade secrets are a problem, with the DMCA's limits on reverse engineering, he said. Our defence as a community, he warned, is to get involved in the political process. Next, Ts'o waded into the debate between the GNU GPLv2 and GPLv3.
GPL version 3, he said, is now three months old, and it is clear that GPL version 2 is not going away. The result of this is that there are now two separate licenses. Kernel developers, he said, are not fond of version 3. He summarised the debate in two words: "embedded applications." Linux is in data centres and pretty much owns the Web serving market, Ts'o said. Embedded systems and desktops are Linux's future markets.
Ts'o said he's on the kernel side of the debate. From the GPLv2 view, he explained, we want embedded developers to use and contribute back to Linux so we can all do the stone soup thing and all make it good.
From the GPLv3 view, he went on, the mission is to allow all end users to be able to use embedded appliances with their own changes in the systems. TiVo, for example, has checksums to make sure changes are not made and this is a violation of freedoms. The rebuttal to this, he continued, is that if we use v3, appliance vendors will simply go elsewhere. We love our contributions, he said, and the developers' priority is software, not hardware. Open architecture companies will tend to survive better, he said. Most TiVo users won't make the hacks, but that 0.01% that do enhance value for the rest.
This argument will not be settled, Ts'o asserted. It's a values argument, a religious one. GPLv3 adds restrictions, he said, and that means that GPLv2 sees GPLv3 as a proprietary license. The FSF, he said, argues that these restrictions are good for you.
GPLv3 code can have v2 code mixed in with permission, he said, but what happens if an LGPL v3 library is linked to a GPL v2 application? Is it legal? Now developers have to worry about GPL compatibility that undercuts instead of establishing new markets. It's kind of a shame, he concluded.
Competition within the existing Linux markets is becoming a problem, Ts'o said. In the early days, Bob Young, founder and one-time CEO of Red Hat, used to hand out CDs for competing distributions at events. He described this as "growing the pie," said Ts'o. The bigger the pie, the more Red Hat's piece of it was worth. The competition right now, though, is causing him some worry, he said. He called it the tragedy of the commons.
Some companies are doing the hard work while others are reaping the reward, he said. As some companies do the research and development and make the results open source, others are picking up the work and selling it in their own products. It's perfectly legitimate, Ts'o said, but there is a risk of companies ceasing to do the work if they are not the ones benefitting from it. Will there be enough investment in the mainline kernel to sustain it? Not enough people, he said, are doing code review as it is.
Who does the grunt work?
Open source is good at fixing bugs and making incremental improvements, Ts'o said. Massive rewrites are more difficult. As an example, he cited the block device layer of the Linux kernel. A need was identified in 1995 to rewrite this, but it was not done until 2003. Kernel 2.6 fixed this piece of the kernel, but required rewriting many parts of the kernel to accommodate the new system.
Most major open source projects, he said, have paid people at the core. Linux and GNOME are mostly funded engineers, he said, although KDE is more of a hobbyist project than the others. At some conferences, such as the Ottawa Linux Symposium and LinuxWorld, it is easy to think that corporate dollars are all there is, while conferences like FOSDEM and Ohio LinuxFest are not the same way. The funding of some open source developers and not others can cause tension, he said, noting Debian's controversial Dunc-Tank project, which aimed to pay some staff full-time temporarily to facilitate a faster release. Many people inside Debian did not like this. While some Debian developers are paid by outside companies such as Canonical, people felt that money should not invade the inner sanctum of Debian, Ts'o said. How do you work within this division? We need both the hobbyists and the corporate developers, he said.
Moving on, he warned the audience to be wary of Microsoft. Vista's failure does not mean Microsoft is sitting on its hands. A few years ago, he pointed out, Sun was seen as dead. Now look at it. Microsoft has a lot of money, he said, and will not always make stupid moves.
Software has to be easy for everyone to use. Microsoft spends millions on usability tests. He described a process where not overly technical people are put in front of new software and told to use it. This process is recorded and the developers watch the recordings to gain a better understanding of how actual users will use the applications. He described it as similar to watching a videotape of your own presentation.
Some software, he said, will always be proprietary. Tax filing software, with its involvement of attorneys, and massively multiplayer games such as World of Warcraft are examples of this. If we want to achieve world domination, he said, we must at least not be actively hostile to independent software vendors.
Windows, he said, has a huge installed base. We have equivalents to 80 to 90% of Windows-based applications, but there are a lot of niche apps. The Linux Standard Base, he said, is a way to help ISVs produce that niche software for Linux.
Whither the Linux desktop? The year of the Linux desktop, Ts'o noted, seems to be n+1. But, he said, we are getting better. We are starting to see commercial desktop applications for Linux, such as IBM's Lotus Notes. Laptops are now selling with Linux installed, and we now have a decent office suite.
So what are we missing, he asked. Do we have bling? He moved his mouse around and made his desktop turn around like a cube with Compiz. Do we have ease of use? It's getting better, he said. Raw Linux desktops are pretty much on par with raw Windows desktops for usability, although we have some ways to go to catch up to Mac OS X, Ts'o said.
Do we have a good software ecosystem? We're getting there, he said, but we have that last 20% that is needed.
We have office compatibility now, too. The format is more important than the operating system, he said. If people are unwilling to try Linux, put OpenOffice.org on their Windows machines. With the advent of OOXML, Microsoft's proposed document standard, people will need to change formats anyway. We might as well change them to OpenOffice.org and the OpenDocument Format (ODF). If we win the format war, we can switch the desktops later. And getting people to try OpenOffice.org, he noted, does not require people to remove Microsoft Office.
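One reason the format war is winnable is that ODF is easy to inspect: an OpenDocument file is simply a ZIP archive whose `mimetype` entry declares the document type in plain text. A small illustrative sketch (the function name is mine):

```python
import zipfile

def odf_mimetype(path):
    """Read the declared MIME type of an OpenDocument file.
    ODF packages are ZIP archives containing a plain-text
    'mimetype' entry alongside the XML content."""
    with zipfile.ZipFile(path) as z:
        return z.read("mimetype").decode("ascii")
```

A text document saved by OpenOffice.org, for example, declares `application/vnd.oasis.opendocument.text`.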
It has been a great 16 years of Linux, Ts'o said. It's amazing what we've done. There is lots more to be done, he concluded, but nothing is insurmountable.
Following Ts'o's keynote address and a lunch break, I attended another interesting session that covered a project from a rural area not far from where I live. Ontario's BGLUG, in the counties of Bruce and Grey around Owen Sound, has gotten together with the United Way to distribute donated low-end Linux machines to underprivileged students through an anonymous system with Children's Aid. Brad Rodriguez of BGLUG and Francesca Dobbyn of the United Way gave an hour-long talk discussing the details of how their project got going and its early results.
The short form is that a local government office offered Dobbyn's United Way office a handful of retired but good computers. Some of these were used to replace even older machines in her office, but she contacted the local Linux Users Group about the idea of making the machines available to high school students in need.
Families who are on government assistance must declare the value of all goods they receive, including software, and this value is taken out of their assistance cheques, so it was important, Dobbyn said, to ensure that these machines cost as little as possible and had no sustained costs. The LUG installed Linux on a number of these machines and Dobbyn, through the local Children's Aid society, distributed these machines anonymously to local students in need. They now accept hardware donations and have been keeping this up as a permanent program.
They used Linux, specifically Ubuntu 6.06, Rodriguez said, because it is free as in beer and is not at risk of immediate compromise as soon as it is connected to the Internet if antivirus and other security software isn't maintained, an impossibility when the people setting up the machines cannot contact the people using them.
Four members of the LUG have volunteered to be contacted at any time by these students for help with the machines, which are not powerful enough to run games but are strictly for helping students in need complete homework assignments in the age of typed-only essays and PowerPoint class presentations.
One of the issues schools have faced is that when the students take their work to school, they are putting their disks in Windows computers and using Microsoft Office to print out their assignments. While there is no technical issue, the school board will not allow non-Microsoft software to be installed. Like many school boards, they are underfunded, and Microsoft donates a lot of the computer equipment on the sole condition that non-Microsoft software be barred from these machines.
And the rest
Among the many other topics discussed at Ontario LinuxFest was a completely objective comparison of Microsoft's OOXML document standard and OpenOffice.org's ODF document standard by Gnumeric maintainer Jody Goldberg, who has had to wade through both in depth. His summary is that OOXML is not the spawn of Satan, and ODF is not the epitome of perfection. Both have their strengths and weaknesses, and he sees no reason why we could not go forward with both standards in use.
Ultimately, Ontario LinuxFest was one of the best Linux conferences I have attended. Because the organizers kept it to one day with two session tracks, two BOFs, and the ever-present Linux Professional Institute exam room, there were few times when it was difficult to choose which session to attend. With only a handful of sessions on a wide variety of topics, the signal-to-noise ratio of good sessions to filler was high. I did not find any sessions that were a waste of time.
The only sour note of the conference was that it did not break even. Although between 300 and 350 people attended, organisers literally had to pass a bag around at the end asking for contributions to offset their budget shortfall. In spite of this, I believe Ontario LinuxFest is a conference that is here to stay, and I look forward to OLF 2008.
Originally posted to Linux.com 2007-10-15; reposted here 2019-11-22.
Posted at 16:17 on October 15, 2007
OLS closes on a keynote
The fourth and final day of the ninth annual Ottawa Linux Symposium wrapped up on Saturday with a few more sessions and a keynote address by Linux kernel SCSI maintainer James Bottomley.
During the day I attended a few sessions, among them one entitled "Cleaning Up the Linux Desktop Audio Mess" by PulseAudio lead developer Lennart Poettering.
Linux audio is a mess, Poettering asserted. There are too many incompatible sound APIs, with OSS, ALSA, EsounD, aRts, and others vying for the sound device. Each of these systems has limitations, Poettering said. There are also abstraction APIs, but they are not widely accepted, he said. Abstraction layers slow things down while removing functionality from the APIs they are abstracting.
What desktops lack, he said, is a "Compiz for sound." Different applications should be able to have different volumes. Music should stop for VoIP calls. It should be possible for the application in the foreground in X to be louder than the application in the background. Applications should remember which audio device to use. Hot switching of playback streams, for example between music and voice over IP, should be possible, and sound streams should be able to transition seamlessly between speakers and a USB headset without interruption, he said.
In spite of these missing capabilities and the API mishmash, Poettering said that there are some things that Linux audio does do well: among them, network-transparent sound, the range of high-level sound applications, and a low-latency kernel at the core.
The audio mess in Linux, Poettering said, is not a law of nature. Apple's CoreAudio proves this, as does Windows Vista's userspace sound system. In the effort to improve Linux audio, he said, we need to acknowledge that although the drivers may be going away, the Open Sound System (OSS) API is not. It is the most cross-platform of the Linux sound APIs. It is important to remain compatible with the OSS API, he said, but it is necessary to standardise on a single API. Linux audio needs to stop adding abstraction layers, he said, and it needs to marry together all the existing APIs and retain existing features.
Poettering said PulseAudio, a modular GPL-licensed sound server that is a drop-in replacement for EsounD, offers a solution to the problem. It is a proxy for sound devices that receives audio data from applications and sends audio data to applications, he said. It can adjust sample rate and volume, provide filters, and redirect sound and reroute channels. PulseAudio comes with 34 modules and supports OSS, ALSA, Solaris sound, and Win32. It even supports the LIRC Linux remote control functionality.
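Per-stream sample-rate adjustment of the kind Poettering described comes down to resampling each stream to the output device's rate. As a toy illustration only (this is not PulseAudio's actual resampler, which uses far better interpolation):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Convert a mono stream from src_rate Hz to dst_rate Hz
    using naive linear interpolation between neighbouring samples."""
    if not samples:
        return []
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate  # position in the source stream
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

A sound server performs this kind of conversion on every stream whose native rate differs from the hardware's, mixing the results into one output.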
PulseAudio is not a competitor for the professional audio package Jack, Poettering said; it can run side by side with it. What PulseAudio is not is a streaming solution, nor a demuxing or decoding system. It is not an effort to try to push another EsounD on people. It is a drop-in replacement for EsounD that will just work, superseding every aspect of EsounD and ALSA dmix in every way.
PulseAudio is included, but not enabled, in most distributions. Now that he works at Red Hat, Poettering says, maybe Fedora 8 will have it enabled by default.
James Bottomley's keynote address
This year's keynote address was delivered by kernel SCSI maintainer James Bottomley, a charismatic Brit known, among other things, for wearing a bow tie. His lively presentation was called "Evolution and Diversity: The Meaning of Freedom and Openness in Linux."
Borrowing a slide from Greg Kroah-Hartman's 2006 keynote, Bottomley showed a picture of David and the Flying Spaghetti Monster with the caption: "Linux is evolution, not intelligent design." Evolution is a process for selection, he said, and diversity is the input to the evolution. Evolution selects the most perfect options from the diversity tree. In nature, evolution results in only one or two or three perfect species from the diversity input. In Linux, Bottomley said, evolution is an adversarial process involving the occasional bloodbath on the Linux Kernel Mailing List, patch review, testing, and taste.
Bottomley said oddball architectures like Voyager and PA-RISC create diversity to feed the evolutionary process of Linux. Getting architectures like these included in the kernel requires innovation. There are other constituencies with small communities that make a big difference: accessibility, for example. It is not popularity and number of users that determine what gets into the kernel; anything that is done well can go in.
Evolution and diversity are battling forces, Bottomley said. Innovation is created by their give and take.
Freedom also appears as a result of this give and take. As long as the ecosystem works, you have freedom to think, innovate, and dream. Linux supports any device, large or small.
Openness is not like freedom, Bottomley said. Openness is a fundamental input to the evolutionary process, while freedom appears as a result of the process. Unless you show the code, no one can review it, and it cannot be debugged and stabilised.
If you are not testing Linux kernel rc1, you have less right to complain about the kernel, he said. Don't concentrate on distribution kernels, concentrate on upstream kernels. If you test for bugs upstream, fewer bugs will flow to the distros. Wouldn't it be nice, he asked, to find the bugs before they get into distribution kernels?
Maintainers are arbiters of taste and coding style. They are the guarantors of the evolutionary process.
They have the job of applying the process to get people to come forward and innovate.
Diversity itself acts as an evolutionary pressure, he said. Bit rot, an old but heavily used term, is the equivalent of mold, sweeping up dead things. Bit rot ensures old code is dead and gone. Bit rot is why we will never have a stable API in Linux, Bottomley said.
A lot of people are afraid of forking, Bottomley said, but the mm tree is technically a fork of the Linux kernel. He posted a slide quoting Sun CEO Jonathan Schwartz: "They like to paint the battle as Sun vs. the community, and it's not. Companies compete, communities simply fracture." The quote was part of an argument on the Linux Kernel Mailing List (LKML) earlier this year.
What does it mean that companies compete, asked Bottomley, posting a slide with the names HP-UX, AIX, SYSV, SunOS, and MP-RAS overlaid on a nuclear explosion. The battle of the Unixes is what it means, he said, and it left a lot of corpses and wounded a lot of customers. He warned that Schwartz is trying to portray Linux as an inevitable return to the Unix wars.
The forking that we do and the fragmentation that we do in Linux, he said, is necessary for our ecosystem.
Forking provides the energy for our evolutionary process. It is a hard idea to get your mind around, a paradigm shift, he said. No project is open source unless it is prone to forking, he stated. Go look at Solaris code, he challenged, and see how you can fork it. No one owns Linux, but all of the thousands of people who have contributed own a piece of the Linux kernel. The freedom to think, to experiment, and to fork is what drives the community. Sun is engaged in a FUD campaign to link Linux to fragmentation.
Rather than fighting this we must embrace this message, he said. Openness and innovation force forks to merge. The combination of these forks is usually better than either fork. Nature creates lots of forks, he noted. Evolution is wasteful, but Linux does this in a useful way.
We must increase the pace of the "innovation stream," Bottomley said. The process is getting faster. The increasing number of lines modified per release is not a problem, it is in fact a good thing. Increasing the pace of change must increase the evolutionary pressure.
One of the problems facing Linux, he noted, is that it is written in English, which keeps the non-English-speaking majority of the world from contributing to the Linux kernel. We need to come up with a way of accepting patches in foreign languages, he said.
No talk about Linux is complete, Bottomley said, without a discussion of closed source drivers. When you produce a closed source driver, you cut yourself off from the community and from the evolutionary process. Bit rot is powerful against you and you are in a constant race to keep up with the kernel. Closed source drivers waste the talent of your engineers and waste money. They aren't immoral or illegal, just "bloody stupid," he said.
We often preach to the converted, he warned. Engineers at companies providing closed source drivers generally support open aims. It is the management and lawyers that have the problems. They see the code as intellectual property, and property needs defending. Encouragement is not brought about by flaming the engineers on the LKML, he warned. You have to go after the executives and the legal arm. You can tell the companies to go to the Linux Foundation if need be. The Linux Foundation has an NDA program to produce fully open source drivers from NDA specifications. Saying he wasn't sure whether it had been announced, but that if not he was hereby announcing it, Bottomley said Adaptec will be the first company to use the NDA program to get open source drivers.
The purpose of the NDA program, he said, is for companies whose specs themselves are fine for releasing, but whose document margins are full of comments from the engineers that could be construed as slanderous. Hewlett-Packard, for instance, once gave out some PA-RISC documentation with doodles containing blasts against the competition. Lawyers had to redact the doodles before it could be released. The NDA program can get around the issue of dirty little secrets like this, he explained. All we want, he said, is the driver.
To wrap up, he said, evolution and diversity put tension in the system until freedom is created in the middle. Forking is good. Whatever we are doing, we need to keep doing.
Following the keynote, the OLS organisers provided entertaining announcements and gave away prizes.
Aside from the routine announcements about the functioning of the conference, organiser Craig Ross said that this, the ninth OLS, is the first at which the organisers had not heard from the main hotel where attendees were staying, the police, or the city of Ottawa about any of the attendees.
The ninth Ottawa Linux Symposium demonstrated Andrew J. Hutton and C. Craig Ross's professionalism at putting on the conference yet again. The well-oiled organisation team kept everything flowing smoothly and more or less on time, with nearly all talks recorded on high-definition video and WPA-protected wireless available throughout the conference centre.
Just a couple of days before this year's Ottawa Linux Symposium, I attended the annual Debian Conference (DebConf) in Edinburgh, Scotland. I saw fewer than five people in common between the two conferences, a testament to the diversity and number of people involved in the Linux community.
Originally posted on Linux.com 2007-07-02; reposted here 2019-11-23.
Posted at 19:21 on July 02, 2007
Thin clients and OLPC at OLS day three
The third day of the Ottawa Linux Symposium (OLS) featured Jon 'maddog' Hall talking about his dreams for the spread of the Linux Terminal Server Project (LTSP) throughout the third world as an inexpensive, environmentally friendly way of helping get another billion people on the Internet, along with an update on the One Laptop Per Child (OLPC) project, and several other talks.
Hall spoke in the afternoon at a very well attended session entitled "Thin clients/phat results: are we there yet?" Hall started with his trademark trademark disclaimer, in which he advised that his lawyers tell him he must remind his audience that Linux is a trademark of Linus Torvalds, among other trademark warnings.
Hall says he has been in computers since 1969, from the era when computers were huge, expensive mainframes. Programmers picked up the habit of doing their work in the middle of the night at this time, he commented, at least somewhat jokingly, because the middle of the night was the only time students could gain access to the mainframes, as the professors were not using them at that hour.
Eventually, minicomputers and timesharing were born. Computers had operators, and terminal users could rely on them to handle regular backups and restorations when they made mistakes. Sometimes timeshare computers had too many users on them, and you could sense this when the five keypresses you typed came back all together after a few seconds, Hall commented.
Finally the personal computer came along, he says. These computers, unlike their predecessors, spent most of their time idling and burning up a lot of electricity and making a lot of noise. He says that in France, there are noise level enforcers who monitor workspace noise with gauges to enforce noise limits.
PCs take up a lot of space, he continued. They dominate desks leaving little room for any other work. They are very inefficient, he says, each requiring its own memory and disk space. And they become obsolete quickly. But this talk is about thin clients, he says, not PCs.
Not to get to his point too quickly, he paused to take a few more swipes at PCs, noting that desktop security is very poor, citing an example where a government employee was found bringing classified data home on her USB key to work on next to her drug-dealing boyfriend. Cleaning crews, too, Hall says, can be a threat to desktop security.
"What is the real problem?" Hall asked, showing a photo of his parents. His father, he says, was an airplane mechanic who took apart the family car engine and put it back together multiple times without any missing or leftover parts, without instructions. Mom & pop, he says, don't want to spend time compiling kernels, they expect their computers to just work.
There is a complex electrical appliance that mom & pop can use that just works, Hall says. It is called the telephone. The telephone network is not trivial, requiring highly skilled people to maintain it, but it is all hidden from the end user.
Thin clients running free software through the Linux Terminal Server Project (LTSP), Hall says, are smaller, thinner, cheaper, lower powered, and easier to use than their PC counterparts. LTSP servers need heavy power, but are a single point of work and maintenance for users and administrators. LTSP is a hit around the world, Hall says. Old systems such as 486s can be reused as terminals. Hall cited the example of a school that used a number of donated obsolete computers and a donated server running LTSP, on no budget, to provide the school with a usable computer system.
Terminals generally have no local storage, Hall says, and can be easily turned off at night, and they don't lend to software piracy by their nature. He used the opportunity to go on a tangent about how he agrees software piracy is bad. Software authors have every right to say how they would like to see their software used, he says. Users are free to use or not use that software.
On the topic of piracy, he discussed how Brazil distributed a large number of very cheap Linux computers. Before long, a major software corporation complained to the president of Brazil that around 75% of the people buying the computers were replacing Linux with pirated copies of another operating system. When asked by the government what to make of this, Hall replied that this is progress, as it is down from the country's 84% national piracy rate.
Hall says that the piracy rate in the US is estimated at 34%, while it is 96% in Vietnam. People in Vietnam, he pointed out, make $2 or $3 per day and should not be expected to pay the $300 for a shiny CD which they know only costs $2 to get down the street.
The third world needs to be on the Internet, Hall says, but it can't afford to do it with proprietary software.
But using LTSP they could.
Hall says that global warming has become a major and significant issue. Imagine, he says, if one billion more people brought desktops online, each using 400 to 500 watts of electricity. Much of the power used by a computer is turned into heat, he says, further adding to the cost in poorer areas by increasing the air conditioning needed to compensate for the extra heat.
Still on the issue of the environment, he says that in his home town in New Hampshire, he used to drive down to the local dump, and later he drove up to the local dump, and eventually it was closed because it was so full that water runoff was starting to affect the drinking water supply for his town. Thin clients are very small, he says, able to fit in the back of a monitor. Thin clients have no moving parts, no noise, and provide a good lifespan so they don't end up in landfills quite so quickly.
LTSP should be used to create a new open business model, he says: allow people to become entrepreneurs with LTSP. Train people to provide LTSP services, he continued, reminiscing about the early days of ISPs before the big companies had the Internet figured out. Local ISPs used to be small local businesses where the clients could actually talk to the operators of the business in a meaningful way, but these were eventually bought up by larger companies and the service deteriorated to a level similar to that of large software companies. LTSP-based net cafes could exist under this model, he suggested.
In South America, Hall says, 80% of people live in urban environments. Basic services are very expensive when they're available at all. An income of $1,000 a month is considered very good in most cases. One hundred clients would provide this level of income to an LTSP entrepreneur, Hall says, and could provide phone or Internet radio services.
Hall says his goal is to have 150 million thin clients in Brazil, requiring between one and two million servers and creating about two million new high-tech jobs in the country. It would create a local support infrastructure and could realistically be done with private money, he says. It would create useful high-tech jobs and on-site support by entrepreneurs.
Linux on Mobile Phones
Another of the sessions of the third day was one entitled the Linux Mobile Phone Birds of a Feather (BoF) by Scott E. Preece of Motorola Mobile Devices and the CE Linux Forum. Preece started by introducing himself and his subject, noting that around 204 million handhelds with Linux are expected to be sold in the year 2012. Motorola expects much of its handheld lineup to run Linux, he says.
Linux is a good platform for experimentation, and by its nature provides good access to talent. Lots of people want to learn Linux, while not as many are interested in learning to develop for Symbian or Windows. Linux, he says, is a solid technology. It can be configured for small systems while retaining large system capabilities. It can be modified to suit needs as appropriate.
Companies using Linux for handhelds have lately been banding together to form a number of collaborative initiatives. There are four major ones, he says, listing the Linux Foundation, the Consumer Electronics Linux Forum, the Linux Phone Standards Forum, and the Linux in Mobile (LiMo) Foundation, as well as two open source projects working on the topic.
One of these is GNOME, which has a community effort to address mobile phones; the other is OpenMoko, which aims to have a completely free mobile phone stack except for the GSM stack, which is hardcoded into a chip. The OpenMoko project, Preece says, is a community-style, code-centric project under a corporate structure.
Preece described the various foundations at some length before tension began to build in the room over his employer's apparent GPL violations. Motorola, said several members of the audience, has already been releasing Linux-based handheld devices, but, in violation of the GPL, has not been providing access to the source code for these devices.
David Schlesinger of Access Linux says that his company would be releasing all the code for its phones no later than the release of the phones themselves, to much enthusiasm.
Preece promised to once again push his company to take the GPL complaints seriously and try to address them.
One Laptop Per Child
The last session I attended on the third day was a BoF about the One Laptop Per Child (OLPC) project presented by OLPC volunteer Andrew Clunis.
"It's an education project, not a laptop project", Clunis began, quoting OLPC founder Nicholas Negroponte. A high quality education is the key to growing a healthy society, he continued, and an inexpensive laptop computer for every child in the world is a good way of doing it.
Children learn by doing. Until they are about five, children learn only what they are interested in learning; then they go to kindergarten and enter the instruction/homework cycle of modern education, Clunis says. The OLPC project helps kids learn by doing and by interacting.
Collaboration is paramount. If our network connections go down, he says, our laptops become warm bricks. OLPC laptops take this to heart by including networkability of applications directly in the human interface specification.
Everything, Clunis says, from the firmware stack to the applications, is free software. It needs to be malleable, as he put it. OLPC is not interested in the consumer laptops of the west.
The laptops themselves depend on as little hardware as possible. They use 802.11s ESS mesh networking for their connectivity. Mesh networking allows each laptop to relay data for other laptops to reach the access point even if they are out of range of it directly. The access point itself, for its part, Clunis says will likely be connected to the Internet by satellite in most cases.
The recharge mechanism for the laptop can use pretty much anything that generates power, although the prototype's crank has been replaced with a pull cord to use the more powerful upper arm muscles instead of the relatively weak wrist muscles, Clunis says. Because of this means of keeping the battery charged, power management is very important. He described the power usage of the laptops as an "order of magnitude" lower than the 20 or 30 watts typically used by modern laptops.
The laptop is powered by a 466MHz AMD Geode LX700 processor with 256MB of RAM, a 1GB flash drive on jffs2 with compression to bring its capacity to around 2GB, a specialized LCD, a "CaFE" ASIC for faster NAND access, a camera, and an SD card slot. The laptop itself sports several USB ports and jacks and has no mechanically moving parts, although it has wireless antennas that flip up and a monitor that swivels on its base.
An OLPC laptop was circulated through the audience at the session, often getting caught for extended periods with individuals fascinated by the tiny, dinner-plate-sized machine.
A member of the audience indicated that the first shipment of OLPC laptops is due to be shipped this fall.
1.2 million OLPC laptops are expected to go to Libya at that time.
Clunis explained that the laptop frames are color-coded by order to help track black-market sales and theft. The laptops themselves are designed to be relatively theft-resistant: they are not useful out of range of their parent access point, and some form of key is required to use the machine.
There was a good deal of cynicism in the room about the value of these laptops in the many parts of the world where the children who would receive them are far too hungry to appreciate the education the machines offer beyond what food their parents might buy by selling them.
Similarly, some people present felt that in the poorest countries, where high crime rates make owning things difficult, the laptops' owners would have trouble keeping them from being stolen. Clunis did not have a clear answer to these concerns.
The third day of OLS wrapped up with my laptop taking a tumble down the stairs between the second and third floors of the venue, though the eight-year-old Dell doesn't seem to be any worse for wear, save for a broken clip. The next day of the conference is the last, with kernel SCSI maintainer James Bottomley's keynote to close the conference.
Originally posted on Linux.com 2007-06-30; reposted here 2019-11-23.
Posted at 15:40 on June 30, 2007
Kernel and filesystem talks at OLS day two
Greg Kroah-Hartman kicked off the second day of the 9th annual Ottawa Linux Symposium with a talk entitled "Linux Kernel Development: How, What, How Fast, and Who?" to a solidly packed main room with an audience of more than 400 people.
Kroah-Hartman set up a large poster along the back wall of the session room with a relational chart showing the links between developers and patch reviewers for the current development kernel, along with an invitation for all those present whose names are on the chart to sign it.
He launched into his talk with a bubble chart of the development hierarchy as it is meant to work, showing a layer of kernel developers at the bottom who submit their patches to about 600 driver and file maintainers, who in turn submit the patches to subsystem maintainers, who in turn submit them to Andrew Morton or sometimes Linus Torvalds directly.
Andrew Morton was originally selected to maintain the stable tree while Linus Torvalds worked on the development tree, Kroah-Hartman says. In practice, he explained, Morton merges all unstable patches into the -mm tree (named for memory management, Morton's historical subsystem), while Linus maintains the stable kernel tree. While they will never admit it, says Kroah-Hartman, the two now work effectively the opposite of how they intended.
Patches are submitted up the previously mentioned hierarchy, and each person who reviews a patch and sends it up adds their name to the Signed-off-by field of the patch. The large chart Kroah-Hartman printed and posted is built from these Signed-off-by relations, and shows that the actual relationships between the layers of developers do not operate quite the way his bubble chart suggests.
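The Signed-off-by convention that the chart is mined from is just a set of trailer lines in each patch's commit message. A minimal sketch of extracting the review-chain relations from one patch (the patch text and names here are invented for illustration):

```python
import re

# A fabricated patch message, for illustration only.
patch = """\
Subject: [PATCH] fix off-by-one in foo driver

Signed-off-by: Alice Developer <alice@example.org>
Signed-off-by: Bob Maintainer <bob@example.org>
Signed-off-by: Carol Subsystem <carol@example.org>
"""

# Each Signed-off-by line names one person in the review chain, in order.
chain = re.findall(r"^Signed-off-by: (.+?) <", patch, re.MULTILINE)

# Adjacent pairs are the developer -> reviewer edges a chart is built from.
edges = list(zip(chain, chain[1:]))
print(edges)
# -> [('Alice Developer', 'Bob Maintainer'), ('Bob Maintainer', 'Carol Subsystem')]
```

Run over a whole kernel history, accumulating these edges per pair gives exactly the kind of relational graph described above.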
A stable release of the kernel comes out approximately every 2.5 months, Kroah-Hartman says. Over the two and a half years since the 2.6.11 kernel, changes to the kernel have taken place at an average rate of 2.89 patches per hour, sustained for the entire period. Kernel 2.6.19 alone sustained a rate of four changes per hour, Kroah-Hartman showed on a graph.
In kernel 2.6.21 there are 8.2 million lines of code. On average, 2,000 lines are added and 2,800 lines modified each day, every day, Kroah-Hartman says. He noted that the actual numbers are a matter of debate, but believes his figures are a relatively accurate reflection of reality.
Asked how these numbers compare to Windows Vista's, Kroah-Hartman says that the Vista kernel is smaller, but does not contain any drivers, so the comparison cannot be made accurately. Kroah-Hartman was also asked whether each dot release is the equivalent of the 2.4-to-2.6 kernel changes. No, he responded, but it takes only around six months to accumulate an equivalent amount of change.
Returning to his presentation after a flurry of questions, Kroah-Hartman compared the various parts of the kernel and noted that the 'arch', or architecture, tree is the largest part of the kernel by number of files, but that the drivers section makes up 52% of the overall size, with the comparison set being core/drivers/arch/net/fs/misc. All six sections seem to be changing at roughly the same rate, he noted. Linux is more scalable than any other operating system ever, Kroah-Hartman boasted: it runs on everything from a USB stick to 75% of the world's top supercomputers.
Kroah-Hartman went on to explore the number of developers contributing to each release of the kernel, finding 475 developers contributing to the 2.6.11 kernel, over 800 to the 2.6.21 kernel, and 920 so far to the not-yet-released 2.6.22 kernel. He noted the numbers could be off a bit, as kernel developers seem to have a habit of misspelling their own names, although he says he did his best to correct for that.
The kernel development community is growing rapidly, Kroah-Hartman says. In the initial 2.6 kernel tree there were only 700 developers. In the last two and a half years, from 2.6.11 to 2.6.22-rc5, around 3,200 people have contributed patches to the kernel. Half of the contributors have contributed one patch, a quarter two, an eighth three, and so forth, Kroah-Hartman noted as an interesting statistic.
By quantity, Kroah-Hartman says, and not addressing quality, the biggest contributors of patches to the Linux kernel are Al Viro in first place with 1,339 patches in the 2.5-year window, David S. Miller with 1,279, Adrian Bunk with 1,150, and Andrew Morton with 1,071. He listed the top 10 and indicated that a more extensive list is available in his whitepaper on the topic. The top 30% of kernel developers do around 30% of the work, he says, representing a large improvement from 2.5 years ago, when the top 20% of kernel developers did approximately 80% of the work.
With statistics flying, Kroah-Hartman listed the top few developers by how many patches they had signed off on rather than written. First was Linus Torvalds at 19,890, followed by Andrew Morton at 18,622, David S. Miller at 6,097, Greg Kroah-Hartman himself at 4,046, Jeff Garzik at 3,383, and Saturday's OLS keynote speaker James Bottomley in ninth place at 2,048. Sometimes, Kroah-Hartman admitted, it feels like all he does is read patches.
Next, he talked about the companies funding kernel development. Measured by the number of patches contributed by people known to be employees of companies, and without accounting for people who changed companies during the data period, Red Hat came in second place with 11.8%, Novell third at 9.7%, the Linux Foundation fourth at 8.1%, IBM fifth at 7.9%, Intel sixth at 4.3%, SGI eighth at 2.2%, MIPS ninth at 1.5%, and HP in the number-ten spot at 1.3%.
The keen-eyed observer will note that he initially omitted first and seventh place from this list. Seventh place went to people known to be amateurs working on their own, including students, at 3.9%, and in first place were people of unknown affiliation who had contributed fewer than ten patches each, making up 33.2% of total kernel development in the 2.5-year period.
Kroah-Hartman made the point that if your company is using Linux but not showing up in the list of contributors, you must either be happy with the way things are going with Linux or you should get involved in the process. If you don't want to do your own kernel contributions, Kroah-Hartman says, you could do what AMD is doing and subcontract his employer, Novell, or another distribution provider, or, for less cost, contract a private consultant to make the contributions your company needs.
Kroah-Hartman's talk, as his talks always are, was entertaining and lively in a way difficult to portray in a summary of his content.
The new Ext4 filesystem: current and future plans
The next talk was given by Avantika Mathur of IBM's Linux Technology Centre on the topic of the Ext4 filesystem and its current and future plans. Why Ext4, Mathur asked at the outset. The current standard Linux filesystem, Ext3, has a severe limitation in its 16 terabyte filesystem size limit. Between that and some performance issues, it was decided to branch into Ext4.
Mathur asked: why not XFS or an entirely new filesystem? Largely, she explained, because of the large existing Ext3 community. The developers would be able to maintain backward compatibility and upgrade from Ext3 to Ext4 without the lengthy backup/restore process generally required to change filesystems. The XFS codebase, she says, is larger than Ext3's, and a smaller codebase would be better.
The Ext4 filesystem, available as ext4dev since Linux kernel 2.6.19, is an Ext3 filesystem clone with 64-bit JBD (journaling block device) support. The goals of the Ext4 project, she explained, are to improve scalability, fsck (filesystem check) times, performance, and reliability, while retaining backward compatibility.
The filesystem now supports a maximum filesystem size of one exabyte. That is to say, with 48-bit block numbers times 4KB per block, Ext4 can hold 2^60 bytes, or 1,152,921,504,606,846,976 bytes. Mathur predicted this should last around five to ten years, after which filesystems of that size may actually appear and a move to 64-bit block numbers can be attempted; Ext4's design should make that possible without having to go to an Ext5.
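The arithmetic behind that limit is easy to check: 2^48 addressable blocks of 4KB (2^12 bytes) each gives 2^60 bytes, the figure quoted above:

```python
blocks = 2 ** 48          # 48-bit block numbers
block_size = 4 * 1024     # 4KB per block
total = blocks * block_size
print(total)              # -> 1152921504606846976, i.e. 2**60 bytes
assert total == 2 ** 60
```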
Ext3 uses indirect block mapping, Mathur says, while Ext4 uses extents. Extents, she explained, use one address for a contiguous range of blocks. One extent can assign up to 20,000 blocks to the same file, and each inode body can hold four extents.
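The difference is easy to see in miniature: indirect block mapping stores one pointer per block, while an extent records a whole contiguous run as a single (logical start, physical start, length) entry. A toy model of extent lookup (this is an illustration, not the on-disk format):

```python
def map_block(extents, logical):
    """Resolve a logical block number through a toy extent list.

    Each extent is a (logical_start, physical_start, length) triple.
    """
    for log_start, phys_start, length in extents:
        if log_start <= logical < log_start + length:
            # The offset into the extent carries over to the physical run.
            return phys_start + (logical - log_start)
    raise KeyError(logical)

# A single entry covers a run of 20,000 contiguous blocks, where block
# mapping would have needed 20,000 separate pointers.
extents = [(0, 1_000_000, 20_000)]
print(map_block(extents, 12_345))  # -> 1012345
```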
Mathur went on to describe features in or soon to be in the Ext4 patch tree. Ext4 will feature persistent preallocation, where space is guaranteed on the disk in advance, with files being allocated space without the need to zero out the data. Using flags, Mathur explained, these extended allocations will be flagged as initialized or uninitialized. Uninitialized blocks, if read, will be returned as zeros by the filesystem driver.
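From userspace, this style of preallocation is reachable through the fallocate family of calls. A sketch using Python's `os.posix_fallocate` (Unix-only; behavior on filesystems without native preallocation may differ), showing allocated-but-unwritten space reading back as zeros:

```python
import os
import tempfile

size = 1024 * 1024  # reserve 1MB up front

fd, path = tempfile.mkstemp()
try:
    # Ask the filesystem to allocate the blocks now, without writing data.
    os.posix_fallocate(fd, 0, size)
    assert os.fstat(fd).st_size == size

    # Allocated-but-uninitialized space must read back as zeros.
    data = os.read(fd, 4096)
    assert data == b"\x00" * 4096
finally:
    os.close(fd)
    os.unlink(path)
```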
Also in Ext4, reported Mathur, is delayed allocation. Rather than allocating space at the time a file is written to buffers, it is allocated when the buffers are flushed to disk. As a result, files can be kept more contiguous on disk, and short-lived or temporary files may never be allocated any physical disk space. Ext4 also sports a multiple-block allocator that can allocate an entire extent at once. An online Ext4 defragmenter is also in the works, Mathur says.
Filesystem check speed is a concern with one-exabyte filesystems, Mathur noted. Using the current fsck, it would take 119 years to check such a filesystem in its entirety. The version of fsck for Ext4 will not check unallocated inodes, Mathur says, among other improvements. She showed a chart of significant performance improvements in e4fsck over its predecessors.
Ext4 has a number of scalability improvements over Ext3, Mathur says, raising the maximum file size from two terabytes to 16 terabytes, with the file size limit being left there as a performance-versus-size tradeoff. Ext4 has 256-byte inode entries, up from 128 bytes in Ext3, and introduces nanosecond timestamps in place of Ext3's second-granularity timestamps. Mathur explained that this is because, at today's speeds, files can be modified repeatedly within a single second. Ext4 also introduces 64-bit inode version numbers for the benefit of NFS, which makes use of them.
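The nanosecond-resolution timestamps are visible from userspace through the stat interface; a quick illustration (the effective resolution depends on which filesystem the temporary file lands on):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"hello")
    st = os.stat(path)
    # st_mtime_ns carries the full integer-nanosecond timestamp...
    print(st.st_mtime_ns)
    # ...and is consistent with the floating-point seconds view.
    assert abs(st.st_mtime_ns / 1e9 - st.st_mtime) < 1.0
finally:
    os.close(fd)
    os.unlink(path)
```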
Mathur wrapped up her presentation with a thanks to the 19 people contributing to the Ext4 project.
The evening of this second day of OLS saw the second of two corporate receptions, with IBM putting on a spread and giving away prizes while talking about its Power6 processors in one of the conference rooms.
The Power6 system, demonstrated briefly by Anton Blanchard, contained 16 cores running at 4.7GHz. He demonstrated a benchmarking exercise that compiles the Linux kernel 10 times for effect; it accomplished this in just under 20 seconds.
Of particular note, Blanchard reported that IBM uses Linux to test and debug the Power6 chips. Following this, the hosts of the reception gave away several PS3s, some Freescale motherboards, and some other smaller bits of swag as door prizes. Thus ended the second of four days of OLS 2007.
Originally posted on Linux.com 2007-07-29; reposted here 2019-11-23.
Posted at 01:13 on June 29, 2007
Day one at the Ottawa Linux Symposium
The opening day of the 9th annual Ottawa Linux Symposium (OLS) began with Jonathan Corbet, of Linux Weekly News and his now familiar annual Linux Kernel Report, and wrapped up with a reception put on by Intel where they displayed hardware prototypes for upcoming products.
The Kernel Report
Corbet's opening keynote began with a very brief history of Linux, showing the kernel release cycle since it was started in 1991. He made the point that the kernel has gone from a significant release every couple of years to one every couple of months over the last few years, with every point release of the kernel being a major release. Today, every point release has new features and API changes.
The release cycle today is very predictable, says Corbet, with kernel 2.6.22 anticipated in July and 2.6.23 expected around October. The cycle starts with a first release candidate, say 2.6.22-rc1; then a second release candidate is made available, and a third (if necessary), and so on until the release candidate becomes stable and work begins on the next kernel.
Each kernel release cycle has a two-week merge period at the start, in which new features and changes are introduced, culminating in the release of -rc1, the first release candidate. This is followed by an 8-12 week period of stabilization of the kernel with its new features. At the end of this process the new stable version of the kernel is released.
This release cycle system, explained Corbet, was introduced with the 2.6.12 kernel, and the discipline within the kernel development community was established within a few releases. He demonstrated this with a graph of cumulative lines changed in the kernel over time and kernel version numbers, which clearly showed a linear pattern of line changes evolving into a kind of staircase of kernel line change rates.
Since kernel 2.6.17 from June of 2006, Corbet reported, two million lines of kernel code have changed with the help of 2,100 developers in 30,100 change sets.
This release process, Corbet says, moves changes quickly out to the users. Where it used to take up to two or three years for new features to be introduced to the stable kernel, it can now take just a few months. This also allows Linux distributors to keep their distributions closer to the mainline kernel. Under the old kernel development model, Corbet continued, some distributions' kernels included as many as 2000 patches against the mainline kernel. With the rapid release cycle, distributions no longer have any significant need to diverge from the mainline kernel.
But it's not all perfect. Corbet noted that among the things not working well are bug tracking, regression testing, documentation, and the fixing of difficult bugs. Some bugs require the right hardware in the right conditions at the right phase of the moon to solve, he commented. As a result, kernels are released with known bugs still in place.
What's being done to address this? Better bug tracking, stabilization-only (debugging) kernel releases, and automatic testing are a few of the tactics Corbet listed. Things in the kernel are getting better overall, he says.
Corbet continued with his predictions for the future of the kernel, with the disclaimer that these are only his opinions of what is to come. First, Corbet says the soon-to-be-released 2.6.22 kernel will include a new mac80211 wireless stack, UBI flash-aware volume management, IVTV video tuner drivers, a reworked CFQ I/O scheduler, a new FireWire stack, the eventfd() system call, and the SLUB allocator.
On the future of scalability, Corbet says that today's supercomputer is tomorrow's laptop. Linux's 512-processor support works well, he says, but 4096-processor support still needs some work. The other end of scalability, such as running on cell phones, Corbet noted, is less well represented in the kernel than its supercomputing counterpart.
Corbet says that filesystems are getting bigger, but they are not getting any faster. As drives and filesystems continue to expand, the total time needed to read an entire disk continues to go up. Most filesystems currently are reworks of 1980s Unix filesystems, he went on, and may need redoing.
Among the changes he predicted in the land of filesystems is a smarter fsck that only scans the parts of the drive that were in use. Corbet says a new filesystem called btrfs, which came out in just the last few weeks, is extent-based and supports subvolumes, snapshotting, checksums, and online fsck.
Offline fsck in btrfs is very fast by design, says Corbet, though he noted that btrfs is far from stable.
The Ext4 filesystem, the successor to the current Linux Ext3 filesystem, should be coming out soon, says Corbet. It will feature the removal of the 16TB filesystem size limit through 48-bit block numbers, plus extents, nanosecond timestamps, preallocation, and checksummed journals.
The Reiser4 filesystem, the successor to ReiserFS, has stalled, says Corbet, with Hans Reiser no longer able to work on it. To move forward, Corbet says that Reiser4 needs a new champion.
The last filesystem Corbet mentioned is LogFS, a flash-oriented filesystem that keeps its directory trees on the media.
Virtualisation is becoming more than just a way to get money from venture capitalists, joked Corbet. Xen is getting commercial development and may finally end up in the mainline kernel tree in 2.6.23. He also mentioned Lguest and KVM, the latter a full virtualisation system with hardware support and working live migration that was merged into the kernel in 2.6.20, although it is still stabilizing.
Corbet also brought up containers, a lightweight virtualisation system in which all guests share one kernel.
Several projects are working on this, he says, but noted that they must all work together as multiple container APIs would not work.
Next, Corbet predicted new changes to CPU scheduling, a problem once believed solved, with the introduction of the Completely Fair Scheduler (CFS), which, as Corbet put it, dumps complex heuristics and allocates CPU time in a very simple fairness algorithm. He says he expects this possibly in kernel 2.6.23.
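That "very simple fairness algorithm" can be caricatured in a few lines: track how much CPU time each task has received and always run the one that has received the least. (The real CFS keeps weighted virtual runtimes in a red-black tree; a plain heap stands in for it in this toy sketch.)

```python
import heapq

def fair_schedule(tasks, slice_ns, steps):
    """Toy fair scheduler: repeatedly run the task with the least runtime."""
    # Heap of (accumulated_runtime_ns, task_name) pairs; ties break by name.
    queue = [(0, t) for t in sorted(tasks)]
    heapq.heapify(queue)
    history = []
    for _ in range(steps):
        runtime, task = heapq.heappop(queue)   # least-served task runs next
        history.append(task)
        heapq.heappush(queue, (runtime + slice_ns, task))
    return history

print(fair_schedule(["A", "B", "C"], slice_ns=1_000_000, steps=6))
# -> ['A', 'B', 'C', 'A', 'B', 'C']
```

With equal weights the result is a plain round-robin; the interesting behavior of the real scheduler comes from weighting each task's accounted runtime by its priority.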
Corbet also says he foresees progress with threadlets, or asynchronous system calls: if a system call blocks, the process continues on a new thread and picks up the results from the blocked call later. Among the other places he predicted change were power management, video drivers, and tracing, with the help of the soon-to-come utrace, an in-kernel tracer.
Last but certainly not least, Corbet says that the Linux kernel is not likely to move to version 3 of the GPL, owing to the fact that the entire kernel is explicitly licensed under version 2 of the GPL; even if the will to relicense it develops, doing so would be difficult.
Piled higher and deeper
The next talk I went to was by Josef Sipek, entitled "Kernel Support for Stackable File Systems." Sipek explained what stackable filesystems are and how they work in some rather extensive detail, and discussed improvements on the way.
They're called stackable filesystems because several of these can be stacked together on top of one actual filesystem to give the filesystem additional functionality.
A stackable filesystem is a virtual filesystem layer that wraps around another filesystem. Sipek listed a number of stackable filesystems with largely self-explanatory names: among them ecryptfs, a stacked layer that encrypts data on its way to the actual filesystem; gzipfs, a filesystem that compresses data; unionfs, a filesystem that combines multiple filesystems; replayfs, a stacked filesystem that can replay calls to the actual filesystem to assist with debugging; and a number of others.
At the end of the day, Intel held its annual reception in an upstairs room with free food and alcohol. The company used to provide speakers during this event as well, but as happened last year, an Intel employee rose to announce that there would be no speeches, just a technology demonstration in the corner of some of the company's latest work. He thanked those at OLS for their continued important work, and the notion of free alcohol with no speakers went over well, drawing enthusiastic applause.
At the technology desk I found an interesting-looking device designated the ZI9, a soon-to-be-announced prototype of a mobile Internet device measuring a bit over 4 inches across by a bit over 6 inches long. By the time I got to it the battery was dead, but it looked like a cross between a Blackberry and a tablet laptop.
The device had a miniature keyboard and a large monitor on a swivel to cover the keyboard or operate as a useful monitor, as well as a small camera on a skewer across the top of the device. This device is apparently designed to run Linux.
OLS continues through Saturday, June 30th. We'll have additional reports throughout the week.
Originally posted to Linux.com 2007-06-28; reposted here 2019-11-23.
Posted at 01:18 on June 28, 2007
DebConf 7 positions Debian for the future
At last week's DebConf 7 Debian Conference in Edinburgh, Scotland, nearly 400 attendees had a chance to meet and socialise after years of working together online. They attended more than 100 talks and events, ranging from an update by the current and former Debian Project Leaders to a group trip to the Isle of Bute, off the opposite coast of the country.
Throughout the conference, socialising Debian developers could be heard discussing the finer points of programs they maintain and use and ways things could be improved.
Sometimes a laptop would open and something would get fixed on the spot.
The conference itself was held at the Teviot Row House in Edinburgh, just a few minutes south of the downtown core of the historic city. The venue had several presentation and discussion rooms, and all rooms were equipped with video cameras for recording the proceedings.
The conference opened with a welcome from the organising team and an update from the former and current Debian Project Leaders (DPL).
Former DPL Anthony Towns of Australia opened with a recollection of his term as DPL. Towns noted that the first month of a DPL's term is taken up with media interviews asking questions like "what's it like to be DPL?" before he has the answers. After that, the DPL's role is mainly dispute resolution and money allocation.
Towns introduced Sam Hocevar of France, the new DPL, who has been in the role only a couple of months. Hocevar laid out what he wants to see for the future of Debian, saying that he would like to see a sexier, more efficient distribution that integrates all the desktop components.
Debian's quality assurance team should focus not only on license compliance, Hocevar continued, but on the quality of software. Every package, for example, should have proper man pages. And Debian should be aiming for faster release cycles.
On the topic of efficiency he referred to his DPL election platform. Don't rely on Debian's teams to do what you can do yourself, he asked. Citing Wikipedia's policy, he said "be bold." If something needs doing, just do it.
He also asked that each developer set up his work within Debian so that if he gets hit by a comet or gets married his work can go on. Even a relatively short period of inactivity when the goal is a faster release cycle can have a big impact, he warned.
Hocevar asked developers to take back some of Ubuntu and other distributions' work. Don't ignore their work just because you think Debian is better, he said.
We need better communication, he went on. Use the debian-devel-announce mailing list to announce the things you do. Put everything you do in a public place for future use and reference. Look at patches implemented in other distributions and use them where applicable, improving Debian and Free Software at large together.
In the Q&A session that followed, Hocevar noted that technical expertise in package maintenance should not be the primary requirement for becoming a Debian developer, citing translators as an important development role in which package maintenance itself is not the critical aspect.
Near the end of the lengthy Q&A session he was asked what he thought about governance reforms in Debian. Noting that he has not felt overwhelmed as the DPL, he indicated that he did not see a need for such things as a DPL team, but reserved the right to change his mind in the future.
Evolving Debian's Governance
The topic of governance, governing committees, and conformance with Debian's constitution came up several times over the course of the conference. The next talk on the topic was by Andi Barth, entitled "Evolving Debian's Governance."
Barth started out by noting that while Debian's constitution should be central, it is not, yet the system does actually work. Any improvements therefore need to be made cautiously, to ensure they actually improve things.
He summarised how the governing structure of Debian currently works, describing Debian's famous flame wars as a form of governance. General Resolutions, in which all Debian Developers are invited to vote on a particular issue, are long and painful and not often used. There is a Technical Committee responsible for technical policy and for technical decisions where two developers' work overlaps. The DPL has the power to delegate power, and the Quality Assurance team has the power to forcefully upload packages.
So what improvements are needed, he asked. Does Debian need a DPL team? A Social Committee, which would itself become the topic of its own talk some days later in the conference, or perhaps a reform of the DPL delegates system?
In the discussion that ensued it was suggested that Debian should not fire developers over technical violations of the constitution where good work is still being done.
Another commenter stated unequivocally that the problem with Debian is that some infrastructure teams cannot be overridden: they are not elected or accountable, yet can make final decisions. Barth responded that, on the other hand, the people doing that kind of work should not be arbitrarily overridden.
The constitution exists for that reason, to not give the DPL the ability to fire people over the colour of their hair.
Sam Hocevar, the DPL, chimed in that there is no sense in risking the participation of important contributors to the Debian project at large by firing them from a specific role.
Another participant noted that when people are causing trouble only due to lack of time to contribute, they should be eased out and replaced rather than engaged in a flame war by others. It would be helpful, said the same commenter, if Debian experimented with telling people they have been "hit by a bus" to see how the project can cope without specific people.
An SPI first
During DebConf 7, Software in the Public Interest, the legal umbrella behind Debian and several other projects in the US, held a meeting of its Board of Directors in person, a first for the board with members from four countries.
The DPL noted that in person meetings tend to be beneficial and suggested that if Debian Developers need to get together to resolve something that they should contact him. Debian has the money should developers need to meet.
Still on the topic of governance, Andreas Tille hosted a discussion session during the conference entitled "Leading a Free Software project" in which he sought to answer three basic questions: What to lead? Who to lead? How to lead?
At the core, he asked, what is the motivation to work on Free Software? Getting something to work for yourself, he answered himself. There is no ready solution to work on a certain task, so you start coding. If you release the code as Free Software you can pick up some colleagues who have the same or similar needs, and then you start splitting the work and improving the code. Releasing Free Software, he said, is a clever thing to do. Releasing code is a way to make friends.
His example of Free Software project leadership is the Debian-Med distribution, a Debian-based distribution for the biomedical community. He became its leader, he said, by issuing the announcement of the project. If you want to avoid becoming a project leader, he cautioned, don't be the one to announce the project.
If you take care of the infrastructure for the project, like the Web page and the mailing list, you are continuing on the path to becoming the project leader by default, he cautioned: you have done some work, so now you are the leader. If you try to do reasonable things to bring the project forward, people tend to draw you into a leadership position. He called this type of leadership a "do-ocracy": he who does, rules.
Who to lead? Free Software developers are individualists, he said. They just behave differently from normal computer users. They want to dive deeper into the project and do not necessarily accept what others present. Developers, he warned, often refuse to accept leadership; just look at their T-shirts. They do, however, accept leadership from people they respect and would normally otherwise agree with. To that end, if you do not have a technical background, you will simply not be accepted as a Free Software leader, he warned.
On effecting decisions, Tille noted that you have no lever with which to force your developers. If you force people, they will simply go away; there is no motivation for them other than working on their own project. If people are forced to do things they do not want to do, they will simply leave the project. The relationship is different from the employer-employee relationship.
There is also the risk of projects forking, he commented, which is not normally a good course of action.
Differences between developers and the project leadership are generally the source of forks.
Tille's talk finished with an extended discussion on how leaders may or may not have control over project developers. Tille summarised it nicely by saying, "You have to be clever to be a leader."
A mission statement for the project, periodic reports, and good communication, including in person or by telephone as needed, are important for the good functioning of a project, participants agreed.
Be nice to your people, Tille advised. Sometimes people in a position of leadership tend to become harsh toward others. If someone does a good job, say so. Positive reinforcement works. Don't reject things out of hand and be an example to follow. It's about taking a leading role, not about being a strong leader, he said.
Thursday, Alex "Tolimar" Schmehl hosted a discussion group entitled "Debian Events Howto," with an eye toward helping Debian Developers organise booths at conferences when invited.
The problem, Schmehl said, is that Debian is invited to participate at conferences and trade shows around the world, and currently is not able to keep up with the demand. The organization declined at least 15 invitations last year due to a shortage of volunteers, an unfortunate situation, he said.
Schmehl said he learned how to organise a booth the hard way when organising one for LinuxWorld in Frankfurt some years ago. In spite of his lack of experience, he said, it went well. Organising a booth, he assured his discussion group's participants, is easy.
Start by brainstorming about what is needed. The short list, in the order determined by the room: T-shirts and merchandising goods; name tags, to help gain the trust of people coming to talk to the booth attendants; something to look at other than the volunteers themselves, such as a demonstration computer; and a code of conduct for participants that includes a reminder that they are not there to hack in the corner.
DVDs and CDs are, of course, critical. People come to the booth for Debian, after all. Burned DVDs or CDs are fine, Schmehl said, and can even be burned while people wait, giving them more of a chance to talk about Debian and learn more about it while they wait.
Critically, have a place for technical support at the location. People often show up with laptops or full desktop systems seeking help with Debian, and a place in the booth that is relatively out of the way to help these people is important, he advised.
Someone asked how you could expect volunteers to attend with a code of conduct. Schmehl pointed out that developers already contend with a code of conduct to make Debian packages. The same idea applies to volunteering in a booth. There need to be some minimum criteria for being a booth volunteer, he said.
Another person suggested charging a nominal fee for CDs so that they last beyond the first hour of the show. Schmehl suggested making CDs for a minimum donation of zero cents.
Posters and flyers are important and help visitors remember Debian, Schmehl noted. Participants suggested that there should be a set of slides running in a loop.
If at all possible, keep at least one Debian Developer around at the booth, even if they step away for a few minutes. Some people, Schmehl said, come in from far away to see the Debian booth specifically in order to get their GPG key signed and get into the Debian keyring. Finally, Schmehl said, keep pens and papers around to write notes out for visitors.
Schmehl posted three photos of booths from different conferences. The first showed a group of people crowded around a laptop with a scantily clad woman as the background and he asked the audience to spot the problems with this booth. Being a geek crowd, the first problem noted was not the background but the fact that the people were facing away from the conference aisle. Among other problems was the lack of posters or other identifying marks around the booth.
He also warned that everyone present at the booth should have the passwords needed to use any demonstration computers, to avoid the embarrassment of a visitor coming to the booth wanting a demonstration and no one present being able to actually give them one.
It is important for there to be more than one attendee so that people are able to take breaks from booth duty. Taking care of the volunteers with snacks and breaks, Schmehl pointed out, is important, lest the volunteers become unpleasant or aggressive over the course of the day. Also, Schmehl suggested ensuring that the volunteers be dressed appropriately for the type of conference or trade show: a business-oriented conference requires better dress than a developers' conference, for example. Find a place to keep volunteers' personal belongings out of sight. Figure out what people will want to know in advance, too, he advised. Check Schmehl's events howto and the Debian booth information page for more booth FAQs and information.
Critically, a member of the group concluded, if you accept an invitation, show up.
In times to come
DebConf 8 will take place in Argentina, and DebConf 9 will most likely be in the region of Extremadura, Spain, the only bidder to have put forward a strong bid for the 2009 conference.
Originally posted to Linux.com 2007-06-25; reposted here 2019-11-23.
words - whole entry and permanent link. Posted at 01:24 on
June 25, 2007
OLS Day 4: Kroah-Hartman's Keynote Address
The fourth and final day of the 2006 Ottawa Linux Symposium saw the annual tradition of the closing keynote address, this year by Greg Kroah-Hartman, introduced by last year's keynote speaker, Dave Jones, and the announcement of the next year's speaker.
Dave Jones introduced Greg Kroah-Hartman of Novell's SUSE Laboratories, noting in his introduction, among other things, that in his analysis of kernel contributors, sorted by volume, Kroah-Hartman was high on the list.
He is responsible for udev, for which we should beat him later, Jones noted, adding that Kroah-Hartman has spent two years on a crusade to remove devfs from the kernel (to great applause). Jones described Kroah-Hartman as approachable and highly diplomatic, calling his approach to kernel communications diplomacy at its best. He punctuated this with a photo of Kroah-Hartman sitting at a table with his middle finger raised in a most diplomatic pose.
Kroah-Hartman began by noting that he is sure his daughter appreciates the photo.
Kroah-Hartman's keynote was entitled "Myths, Lies, and Truths about the Linux Kernel".
He started with a quote: "My favorite nemesis is that plug and play is not at the level of Windows." Surely, he said, such a quote must come from someone not educated in the ways of Linux. It must be from someone unfamiliar with the system and its progress. He went on to the next slide, and the quote gained an attribution: it was said by Jeff Jaffe, the CTO of Novell. Surely, then, continued Kroah-Hartman, it must have been said a long time ago! Slide forward: Jaffe said it on April 3rd, 2006.
Linux, said Kroah-Hartman, answering the charge, supports more devices out of the box than any other operating system ever has. Linux is often even ahead of the pack, having been the first operating system to implement both USB 2.0 and Bluetooth.
Linux, he continued, supports more different hardware platforms than any other operating system.
Someone shouted from the audience, "What about NetBSD?", to which Kroah-Hartman retorted that Linux blew away NetBSD about three years ago.
Everything from cell phones to radio-controlled airplanes to 73% of the top supercomputers in the world runs Linux, he said. Linux scales.
Mr. Jaffe, he commented, should try his own product. We are doing something really good, he continued.
The kernel "has no obvious design" or roadmap, Kroah-Hartman said, citing the next fallacy about Linux he intended to attack. Marketing departments like roadmaps and design paths, he said, but Linux does not provide them. Linux, he said, has created something no one else ever has.
"Open Source development violates almost all known management theories," he quoted Dr. Marietta Baba, Dean of the Department of Social Science at Michigan State University, as saying.
He posted a slide showing a picture of a painting of a naked man, from what appeared to be a religious context, next to a squid-like animal with a number of weird anomalies, and a quote from Linus Torvalds: "Linux is evolution, not intelligent design."
Linux started off being (barely) supported on a single processor, Kroah-Hartman noted. Then someone offered to fix it to run on another processor, and the process of evolution was well under way. Linux evolves by current stimuli, not by marketing department requirements, he said.
The only way to help the evolution, he continued, is to provide code to the kernel. Ideas without code backing them won't get far.
Linux implemented the POSIX standard six or eight years ago, he said. The evolution of the kernel is fast now, with around 6000 patches per major release. It is changing faster than ever, and becoming more stable than ever.
He moved on to the next myth, paraphrasing a common one: "The kernel needs a stable API or no vendors will make drivers for Linux." For those who don't understand it, he said, an API is how the kernel talks to itself. He suggested reading Documentation/stable_api_nonsense.txt in the kernel source tree for more information on the topic.
Linus doesn't want a stable API, he said. The USB stack, for one, has been reimplemented three times so far. Linux now has the fastest USB stack available, limited only by the hardware. Linux is lean and complex.
Windows, too, has rewritten its USB stack three times, he noted, but all three versions have to stick around in the system to support the various uncontrolled old independent drivers kicking around. Because Linux drivers live in the kernel tree, independent drivers are not a problem, and the API can be rewritten as needed without keeping older versions around in the kernel. Windows has no access to the drivers and cannot adjust the API as a result.
The next myth he poked a hole in is the notion that "my driver is only for an obscure piece of hardware; it would never be accepted into the mainline kernel." We have an entire architecture, Kroah-Hartman countered, being used by just two users. There are lots of drivers, he said, that have but one user.
A company contacted Kroah-Hartman, he said, to find out about putting a driver into the kernel for an obscure task that they needed to do. The driver was put in, and several other companies that also had to do similar tasks no longer needed to maintain their own versions. The contribution became useful on a more widespread basis in a way that could not have been foreseen and is now used to support thousands of devices. Just get your code in, he said; people might actually use it.
He went on to address the problem of closed source and binary kernel modules. Every IP lawyer he has talked to, he said, regardless of who they work for, has agreed on one fundamental point: "Closed source Linux kernel modules are illegal." The lawyers can't say it in public, he said, but he can.
He suggested not asking legal questions on the Linux Kernel mailing list, asking: would you ask the list about a bump on your elbow?
Kroah-Hartman explained how closed source modules included in Linux distributions cause problems and prevent progress, holding back the kernel included with the distributions. Closed source Linux kernel modules are unworkable, he said.
Companies that have intellectual property they say they want to protect, he said, should not use Linux.
When you use Linux, you should follow its rules. You are saying that your IP, he said, trumps the entire Linux community, that you are more important than everyone else. Closed source Linux kernel modules are unethical, he said.
He suggested that companies read the kernel headers to see who owns the copyright on various parts of Linux, noting that you will find companies like AMD, Intel, and IBM represented there. Do you really want to tangle with these companies' lawyers?
On February 9th, 2006, Novell said in an official policy statement that it will no longer ship non-GPL kernel modules. That is as strong a statement as is possible for such a company, he said, noting its lack of reference to the fundamental legal reason for the policy.
Someone shouted out asking about Nvidia. Nvidia, he said, and ATI, and VMWare all violate the GPL, but they do it cleverly: they write their code against the kernel source, but they don't link it, the step that would violate the GPL. They force the end user to do that linking, which prevents the resulting builds from being redistributed.
VMWare, he commented, is not open source.
He moved on to the next myth: it is hard to get code into the main kernel tree.
If there are 6000 changes per release, someone is getting it in, he said. All that you need to do is read the Documentation/HOWTO file in the kernel tree and know what you are doing.
There are a number of ways to start working on the kernel, he explained. The first and easiest is to check out the Kernel Newbies project, available as a wiki and webpage at kernelnewbies.org. The second, he said, is to join the Kernel Newbies mailing list. It is virtually impossible to ask a stupid question on that list; just read the recent archives so you don't ask a question that has just been asked. The third is to get on the Kernel Newbies IRC channel. There are around 300 users there and the channel is usually quiet, but not to worry: people will generally answer your questions.
But when seeking help, he cautioned, be prepared to show your code. People are not inclined to help people who are working on closed source code.
The next step up from Kernel Newbies is the Kernel Janitors project, a list of things that need doing. Check the list and see if you can knock some items off it. Getting a kernel patch accepted is a good feeling, he said. A lot of people started this way.
The next step up is to join the Linux Kernel Mailing List, which has around 400 to 500 messages a day. Don't feel bad about not reading every message, he said, everyone filters. The only person in the world, he said, who reads all of the messages is Andrew Morton. Subscribe, and ask questions there, he suggested.
There are not very many people, he admitted, who review the code that comes in. But when someone reviews your code and gives you feedback, they are right. The reviewers are not the bad people, he warned; the people submitting the bad code are. Kroah-Hartman said he tried reviewing for a week. He said it made him grumpy.
He suggested that people wishing to contribute should spend a few hours a week reading existing code.
You must learn to read music, he analogized, before you can write it.
If you can't contribute by writing code, what can you do, he asked.
It is not possible to do comprehensive regression tests on the Linux kernel, he said. You cannot test what happens if you add this device and remove that device in this order over this time. It just doesn't work.
The best test, he said, "is all of you". Test Linus' nightly snapshots, he urged.
If you find a problem, post it to bugzilla.kernel.org. Then bug him until he feels bad, so that it gets fixed.
Also, he said, try and test the -mm kernel tree.
In conclusion, he said, Linux supports more devices than anyone else. Linux progresses by evolution, not design. Closed source drivers are illegal. Linux can use help with reviews and testing.
And most importantly, he finished, total world domination is proceeding as planned.
With that, he moved on to taking questions.
The first questioner stated that the timeliness of device support is as important as the number of devices supported.
He responded that in order for drivers to come out quickly, it is important for hardware vendors to get involved.
The next person up to the microphone suggested that anyone who wants to learn how something works in the kernel should write a design document for it.
Kroah-Hartman responded that design documents need to go directly into the source code, as that is the only way they will be kept up to date. There is a movement afoot, he said, to get OSDL to hire a full-time documentation person. He suggested that anyone present who works for a member of OSDL suggest to their employers that they contribute funds for this purpose.
Alan Cox came up to the mic to ask a question. Is Microsoft the Borg as the media likes to portray it, he asked, or is it really Linux that is the Borg?
Kroah-Hartman answered that Microsoft is buckling under its own load. By size, Microsoft is the Borg, but by function, it is Linux. But to him, he said, it is not a matter of us versus them. He doesn't mind the competition.
To sum up
Following Greg Kroah-Hartman's keynote address, the annual door prize draw took place. This year, the CE Linux Forum offered a Philips Linux PVR development platform and three Linux-based Nokia 770s.
Red Hat contributed two laptop bags and three red hats to the draw, and IBM threw in a couple of loaded Apple Power Mac G5s. The hats were distributed by Alan Cox, with Greg Kroah-Hartman's young daughter picking the numbers.
Conference co-organizer Craig Ross closed out the formal part of the conference with a series of announcements.
The first was that a US passport had been found, and would the owner please come forward to claim it.
No one did, but it led to lots of laughter.
Ross announced next year's keynote speaker, SCSI maintainer James Bottomley, without actually naming him: he held up a bowtie, the charismatic Bottomley's trademark, and suggested that all attendees wear one next year in his honor.
Ross continued, reminding all attendees that the closing reception at a nearby pub can only be entered with official conference ID. Guests and spouses can come if they have an event pass, available through registration, Ross said. Girlfriends, wives, and family met along the way between the conference center and the pub would not be welcome. People may not sleep in the fountain at the pub, and should not attempt to climb the fake balcony inside the pub. If you do misbehave in public, he grinned, at least take off your conference ID!
During closing announcements, conference organizer Craig Ross asked the assembled crowd how many were attending OLS for the first time. The number of hands raised in response to this question was quite low, certainly well under a quarter of attendees. It was the fourth OLS that I attended and I am eagerly anticipating next year's.
As for who came to the conference, contrary to what one might expect at an event called Ottawa Linux Symposium, the number of long-haired, unshaven, sandal-wearing geeks is actually very low, though the number of people meeting any one of these criteria on its own is somewhat higher. Judging by the name tags, the number of people attending on their own tab as opposed to being sponsored by their employer is quite low. Most people, though certainly not all, have a company name on their tags. What conclusions can be drawn from this I leave to you to decide.
This year's OLS saw approximately 128 sessions, BOFs, and formalized events, up sharply from 96 a year ago. Sessions started at 10 am and, except for a break for lunch, went on until 9 pm most days, split between four session halls. I took 43 pages of handwritten notes, up one from last year, from attending 26 sessions, up three over last year. Of these, I covered a mere 12 in these summaries. Once again, I hope you enjoyed this taste of OLS and I hope to see you there next year!
Originally posted to Linux.com 2006-07-23; reposted here 2019-11-23.
Day 3 at OLS: NFS, USB, AppArmor, and the Linux Standard Base
The third of four days of the eighth Ottawa Linux Symposium saw a deep discussion on the relative merits of various network file systems in a talk called "Why NFS sucks", a tutorial on reverse engineering a USB device, an introduction to SELinux rival AppArmor, and an update on the status of the Linux Standard Base, among other topics of interest.
Why NFS sucks
Olaf Kirch gave his talk entitled "Why NFS sucks", following a pattern of talks entitled "Why _ sucks" at this year's OLS, on the topic of NFS and its many less successful rivals.
He started by commenting that it was really a talk about NFS and what a wonderful filesystem it is. He meant that just as seriously as he meant the original title of the talk.
Everybody complains about NFS, Kirch stated. To prove his point, he asked the audience if anyone thinks NFS is good. Three people raised their hands in an audience of more than a hundred. The SUSE Linux distribution's bugzilla had "NFS sucks" as a catchall bug for gripes for a while, he commented, though it was recently removed.
In the early 1980s, Kirch stated, getting a little more serious as he began discussing the history of NFS, Sun had a limited network filesystem called RFS. In 1985, Sun released NFS version 2 along with SunOS 2, with no sign of an NFS version 1. In 1986, Carnegie Mellon University and IBM created AFS. 1988 saw the creation of Spritely NFS, which was NFS version 2 with cache consistency. It was another six years before the next major development on the timeline: in 1994, crash recovery was introduced for Spritely NFS, and that same year Rick Macklem released Not Quite NFS (NQNFS) along with 4.4BSD. In 1995, NFS version 3 was released as, as Kirch put it, general wart removal. In 1997, Sun released WebNFS, intended to be as big as HTTP, but it didn't even fizzle. In 2002, NFS version 4, the "Internet filesystem", was released.
Kirch went on to explain the basics of NFS version 2. NFSv2 is a stateless protocol. This allows either party to carry on as if nothing happened after a crash and reboot or restart. If an NFS server crashes, the client just has to wait until the server comes back up, and then it can continue as it was. If it were stateful, every client would need a state recorded and tracked by the server. A stateless protocol scales better. NFS can export almost any filesystem as a network filesystem. It is an important strength of NFS. It is not filesystem specific.
Files need a file handle that is valid for the entire life of the file, Kirch stated. This works well with traditional inode tables, but newer filesystems are more complicated. Directories can reconstruct a chain of entries using the parent directory (..) entries, and files are pointers to inodes and directories; with NFS, these IDs can change.
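The stable identity Kirch describes is easy to see on a local filesystem: the inode number that identifies a file survives a rename, which is exactly the kind of durable handle an NFS server wants to hand out. A minimal sketch in Python (illustrative only; real NFS file handles also encode filesystem and generation information):

```python
import os
import tempfile

# Create a file and record the inode number that identifies it.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "data.txt")
with open(path, "w") as f:
    f.write("hello")
inode_before = os.stat(path).st_ino

# Rename it: the directory entry changes, but the inode does not.
new_path = os.path.join(tmpdir, "renamed.txt")
os.rename(path, new_path)
inode_after = os.stat(new_path).st_ino

# The inode is the stable identity a file handle can be built from.
assert inode_before == inode_after
```

The name-based path to the file changed, but its identity did not; it is that identity, not the path, that NFS file handles are built from.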
NFS listens on port 2049, but it needs to talk to mountd to get the file handle to mount a directory, portmap to get a port to connect to, another protocol to perform file locking, another to recover from failures while remaining stateless, another to recover locks after crashes... Kirch expressed some exasperation with the old NFS attitude, in versions prior to four, that each new feature requires its own protocol. Version four, he noted, mostly gets it right.
NFS version 2, Kirch commented, is notorious for having its implementation details passed on primarily by oral tradition rather than meaningful specs. He described attribute problems that can result in client/server confusion because of different common implementations.
Renaming or deleting an open file should allow continued writing of that file. Over NFS versions 2 and 3, removing or renaming a file can have, as Kirch put it, interesting results. In NFS version 4, this is solved with "silly rename" which turns the removed file into a dotfile (.nfs.xxxxxx), though this file can also be deleted. The dotfile is then only removed once nothing has it open any more.
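The client-side idea behind silly rename can be sketched in a few lines; this is purely an illustration of the logic (the real implementation lives in the kernel's NFS client, and the helper name here is made up):

```python
import os
import uuid

def silly_remove(path, still_open):
    """Illustrative sketch: 'remove' a file the way an NFS client does.

    If no process has the file open, just unlink it. Otherwise, rename
    it to a hidden .nfs dotfile so the data stays reachable through the
    open handle; the dotfile is unlinked later, once the last user
    closes the file.
    """
    if not still_open:
        os.unlink(path)
        return None
    hidden = os.path.join(os.path.dirname(path) or ".",
                          ".nfs" + uuid.uuid4().hex[:8])
    os.rename(path, hidden)  # directory entry vanishes, data survives
    return hidden
```

The trick works because a rename is a cheap, atomic operation the stateless server already supports, so the client can fake "delete later, when closed" without the server keeping any open-file state.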
NFS versions 2 and 3 cannot handle simultaneous access to a file properly, he cautioned; the results can be garbled. NFS version 4 also has the problem, but will give an error message warning that there could be trouble.
Another problem inherent in NFS is the lack of file security. The client machine tells the server the user and group ids of the user trying to access a file on the server, and the server agreeably goes along with the information, trusting the client fully. A number of workarounds have been proposed and implemented over time, but none have really caught on.
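That blind trust is visible in the wire format itself. The sketch below packs an AUTH_UNIX (also called AUTH_SYS) credential body in XDR encoding, the flavour NFS versions 2 and 3 typically use; the field layout follows the Sun RPC specification, but treat this as an illustration rather than a complete RPC implementation:

```python
import struct

def pack_auth_unix(stamp, machine_name, uid, gid, aux_gids):
    """Pack an AUTH_UNIX credential body (XDR, big-endian).

    Nothing in here is verified by the server: it simply believes the
    uid/gid numbers the client chose to send.
    """
    name = machine_name.encode("ascii")
    pad = (4 - len(name) % 4) % 4  # XDR strings are padded to 4 bytes
    body = struct.pack(">I", stamp)
    body += struct.pack(">I", len(name)) + name + b"\x00" * pad
    body += struct.pack(">II", uid, gid)
    body += struct.pack(">I", len(aux_gids))
    for g in aux_gids:
        body += struct.pack(">I", g)
    return body

# A client is free to claim it is root (uid 0, gid 0).
cred = pack_auth_unix(0, "client", 0, 0, [])
```

Since the uid and gid are just integers the client fills in, the workarounds Kirch alludes to (root squashing, Kerberos-based RPC security, and so on) all exist to compensate for this design.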
NFS also has the nasty habit of saturating networks. Prior to version 4, NFS was based entirely on the user datagram protocol (UDP), a lossy protocol that can overwhelm a network if it gets too busy.
Some kind of congestion avoidance was needed, Kirch concluded. It needs to be smarter about retransmission. The solution he offered is TCP, which NFS version 4 now uses exclusively. TCP is a stateful network protocol that ensures packets reach their destination and retransmits only if the packets were lost.
Kirch noted that there are a variety of alternatives to NFS, and summarized the choice as picking your poison. He listed a number of the alternatives, along with brief descriptions, and then went through their strengths and flaws in more detail:
IBM open sourced AFS as an end-of-life solution rather than continuing to maintain it.
DFS came from the Open Group and is either dying or is altogether dead.
CIFS is a surprisingly healthy network file system.
Intermezzo was nicely designed, but went away.
Coda was written by Peter Braam, who subsequently moved on to another project. It's also kind of dead.
Cluster filesystems exist, Kirch noted, but generally live on top of either NFS or CIFS.
NFS with extensions, called pNFS, stores files and metadata on separate servers.
Kirch, having listed them, got a little more in depth about a few of them.
AFS he called "Antiques For Sale" and said the filesystem is in maintenance mode. It relies on Kerberos 4 for security. The code itself is difficult to read, being a mass of #ifdef statements used to make it portable across multiple platforms. It is not interoperable, and cannot function on 64-bit platforms.
CIFS he called the "Cannot Interoperate File System". It is a stateful, connection-based network file system. He described the protocol as a jungle, saying he couldn't speak about it any further because it is just "horrible". Its biggest problem, he noted, is that it is controlled by Microsoft, and that is its main barrier to adoption. Users want to know that it will still be there tomorrow, he added.
NFS version 4 he described as "Now Fully Satisfactory?" It's an Internet-oriented filesystem that has got a lot of things right. It interoperates with Windows, is on a single, firewall-friendly port (2049), and a flaw in callback code that opened another port has even been fixed in version 4.1. It is entirely TCP, with UDP now a thing of the past.
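In practice, the single-port design means an NFSv4 deployment needs only one TCP port reachable. The commands below are illustrative only; the server name, export path, and choice of firewall tool are assumptions, not from the talk:

```shell
# Mount an NFSv4 export over TCP (server and path are placeholders):
mount -t nfs4 fileserver:/export /mnt/export

# NFSv4 multiplexes everything over 2049/tcp, so a firewall needs just
# one rule, e.g. with iptables:
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
```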
Basics on reverse engineering a USB device
I attended one tutorial session: Reverse engineering USB drivers for compatibility, by F/OSS consultant Eric Preston.
He began with a standard disclaimer "This is for educational purposes only."
The premise is simple: USB devices often lack vendor support. The vendors don't care about Linux, and their excuses range from "nobody uses Linux" to "USB IDs are intellectual property" to "who cares about USB, anyway? Linux isn't on the desktop."
What do we do? Preston asked. We can wait for support from the device vendors or the community at large, or we can do it ourselves.
The mission, therefore, Preston stated, is to figure out how existing drivers work in order to write drivers ourselves. The goal is to support cool hardware, get more people involved in writing userspace drivers, and remove barriers for less experienced developers; make driver writing fun and less tedious.
The tools needed to reverse engineer a USB device, Preston explained, are, primarily, usbsnoopy and Windows. Using Windows, where most drivers are, and usbsnoopy, it is possible to see the interaction of packets between the USB device and the device driver in the operating system. It creates a log which can then be decoded into the functions.
To figure out what is what, simple tasks can be performed in Windows on the USB device and the interaction monitored and logged. Then the USB specification can be consulted and the log can be manually decoded, eventually, after months of work, resulting in some idea of what is happening.
With the help of VMware or other virtualization programs, the painfully frequent reboots involved in the process can be avoided, and Linux tools can be used in place of usbsnoopy: a Linux program called usbmon, in combination with the network snooper Ethereal and an Ethereal dissector for USB traffic, can monitor a USB device's packets from Linux. Preston is writing the dissector, but warned that the code is very messy and not quite ready to be shared.
The drivers themselves can be written with the help of libusb entirely in userspace. With the advent of libusb, it is no longer necessary to write kernel drivers to run USB devices. Preston did not actually write a driver in the tutorial, but did show attendees in the beyond-capacity packed room the path to do so.
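The usbmon trace that feeds such a dissector is line-oriented: URB tag, timestamp, event type, an address word, status, length, and optional data words. As a rough illustration (the sample line is fabricated, and the field layout is assumed from the kernel's usbmon text interface), a toy parser might look like:

```python
def parse_usbmon_line(line):
    """Split one usbmon text-format line into labelled fields.

    Assumed field layout (from the kernel's usbmon text interface):
    URB tag, timestamp (us), event type, address word, status, length,
    and optionally '=' followed by data words.
    """
    fields = line.split()
    xfer, bus, dev, ep = fields[3].split(":")
    return {
        "tag": fields[0],
        "timestamp_us": int(fields[1]),
        "event": fields[2],     # S=submit, C=complete, E=error
        "transfer": xfer,       # e.g. 'Bo' = bulk OUT, 'Ci' = control IN
        "bus": int(bus),
        "device": int(dev),
        "endpoint": int(ep),
        "status": fields[4],
        "length": int(fields[5]),
        "data": fields[7:] if len(fields) > 6 and fields[6] == "=" else [],
    }

# A fabricated sample line in the same shape as a usbmon bulk-OUT submit.
sample = "ffff8800d6a4f500 3003268580 S Bo:1:002:2 -115 31 = 55534243 08e0"
urb = parse_usbmon_line(sample)
```

Decoding months of such lines against the USB specification is the tedious part Preston described; the parsing itself is trivial.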
AppArmor vs SELinux
Among the interesting BOF sessions of the evening was one called The State of Linux Security, led by Doc Shankar of IBM.
He invited several security experts to give brief updates on their security projects, largely concentrated around SELinux, whose esoteric nature is completely over my head. But one brief presentation particularly caught my attention.
Crispin Cowan of Novell presented recent Novell acquisition Immunix's AppArmor Linux security suite, which appears to be an alternative to SELinux.
Its simplicity and logic led me to wonder why I had never heard of it before. The long and the short of it is that it is a security tool that restricts a service or application to only the privileges (including specific root privileges) and files it needs to perform its duties, and it is capable of learning what those are, without being explicitly told, by watching the programs to be defended perform their tasks and logging what they do. Cowan did a brief demonstration, showing how Apache could be tied down with AppArmor in just a couple of minutes, preventing a root hole in a sample Web page from being exploitable, by virtue of not allowing the resources needed to exploit it.
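Based on Cowan's description, an AppArmor profile confines a program to an explicit list of files and permissions. The fragment below is illustrative only; the paths, abstraction name, and rules are my assumptions, not taken from the demonstration:

```
# Hypothetical AppArmor profile confining an Apache binary:
/usr/sbin/apache2 {
  #include <abstractions/base>

  capability net_bind_service,   # the one root privilege it needs

  /etc/apache2/** r,             # read its own configuration
  /var/www/** r,                 # read the web root
  /var/log/apache2/* w,          # write its logs
  # anything not listed here is denied
}
```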
How can you beat that?
Update on the Linux Standard Base
The last session I attended on the third day was the obligatory annual Linux Standard Base update, presented as a BOF by Mats Wichmann.
Since the last OLS, Wichmann said, the Linux Standard Base version 3.1 has been released in two parts. The first part, the LSB core, was released in November of 2005, with the second part, the modules, released in April of 2006. It was split in two to allow it to meet International Organization for Standardization (ISO) deadlines to become an ISO specification.
As a result of the ISO involvement, there are now two LSB streams. One is a relatively frequently updated version administered by the Linux Standard Base project itself, the other is the ISO specification. The two specifications are essentially identical.
The ISO specification exists mainly to allow governments, whose contract tenders for technology often require ISO standards compliance, to list the Linux Standard Base as a requirement. ISO/IEC standard 23360 provides this.
The Linux Standard Base documentation is released under the Free Documentation License, but for the ISO it is effectively dual-licensed, to allow the ISO to retain it as an official standard under its direction.
Asked how hard it is to keep the ISO version of the LSB standard up to date, Wichmann replied that it is a concern. The specifics of the specification cannot be changed all the time, even though the LSB project itself is evolving. The ISO specification can be kept up to date with occasional errata report filings, but the update cycle with the ISO is approximately 18 months. As a result, the ISO spec will inevitably lag behind the LSB specifications.
The next question asked who gets certified with the LSB. Wichmann answered that any company that has an economic interest in certifying its distribution or software package will do it, if there is a return. In theory, anyone can get any software certified, he noted, and there is no reason that companies cannot keep their software compliant even if they don't go through the process of actually being certified. Questions on how conformance is verified and how long it takes were also asked. It's a self test, Wichmann admitted. Labs are too expensive, but tools are available for anyone to download and run against the software they would like to check for compliance. If there are no errors, the tests can easily be completed in a single day. If there are errors, naturally it will take longer. To become certified, the logs of the tests need to be submitted.
It was noted during the session that the Linux Standard Base's role is more or less passive. It does not mandate standards that are not generally already the norm. Its mandate is to document, not to push, even if better systems exist than the ones that are in use.
The last day of the conference promises to be exciting, with Greg Kroah-Hartman's keynote address. Stay tuned!
Originally posted to Linux.com 2006-07-22; reposted here 2019-11-24.
words - whole entry and permanent link. Posted at 18:51 on
July 22, 2006
Day two at OLS: Why userspace sucks, and more
OTTAWA -- Day two of the eighth annual Ottawa Linux Symposium (OLS) was more technical than the first. Of the talks, the discussions on the effects of filesystem fragmentation, using Linux to bridge the digital divide, and using Linux on laptops particularly caught my attention, but Dave Jones' talk titled "Why Userspace Sucks" really stole the show.
The first of these talks, "The Effects of Filesystem Fragmentation," was led by Ard Biesheuvel, a research scientist who works on Personal Video Recorders (PVRs) in the Storage Systems & Applications group of Philips Research. Biesheuvel explained that a PVR operates by recording a television signal to a box, and employs metadata to describe what is available. It has some degree of autonomy in what it does, and does not, record by creating a profile of what the user likes to watch, or recording something that a friend's PVR is recording. It records a lot, and it can often record more than one TV show at a time.
With the PVR explained as the demonstration platform, Biesheuvel's talk carried on to filesystem fragmentation. Fragmentation, Biesheuvel said, is generally expressed as a percentage, but a percentage says little about its actual impact. A new metric must be created for determining the impact of filesystem fragmentation, and a useful one is relative speed.
Biesheuvel showed a slide of a diagram of a hard drive platter. It showed how data is stored on tracks, rings of data around the platter, with each track offset from the next by an amount appropriate for allowing the disk head to leave one track and get to the next, arriving at the right point to continue.
A gap, he explained, is the space between segments of a file not belonging to the file. Fragments are the non-contiguous pieces of the same file. Hard drives generally handle small gaps by reading through the data on the same track through the gap, while on larger gaps the drive head will seek (travel) to the track of the next fragment and then read it. Ideally, he says, there will be one seek and one rotation of the drive per track of data belonging to the file being read.
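A back-of-the-envelope model makes the cost of fragments concrete: each fragment adds a seek plus, on average, half a rotation on top of the raw transfer time. All numbers in the sketch below are assumptions for illustration, not figures from Biesheuvel's talk:

```python
# Toy model of fragmented-read cost. All constants are assumed values.
SEEK_MS = 8.0             # assumed average seek time
ROTATION_MS = 8.33        # one rotation at 7200 RPM
TRANSFER_MB_PER_S = 60.0  # assumed sustained transfer rate

def read_time_ms(size_mb, fragments):
    transfer = size_mb / TRANSFER_MB_PER_S * 1000.0
    overhead = fragments * (SEEK_MS + ROTATION_MS / 2.0)  # seek + half-rotation
    return transfer + overhead

def relative_speed(size_mb, fragments):
    # Throughput relative to reading the same file stored contiguously.
    return read_time_ms(size_mb, 1) / read_time_ms(size_mb, fragments)
```

With these assumed numbers, a 100MB file shredded into 200 fragments reads at well under half its contiguous speed, which is the kind of degradation the relative-speed metric is meant to capture.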
With the background explained, he described the tools for his tests. The first, called pvrsim, operates by simulating a PVR. It writes files between 500MB and 5GB in size to disk, two at a time, endlessly emulating the lifecycle of a PVR. It deletes recordings as space is needed for new ones by a weighted popularity system.
The next tool is called hddfragchk, which is not yet available for download, but Biesheuvel says it will be made available eventually. The hddfragchk utility shows the hard drive as a diagram of tracks with the data from each file assigned a color. He demonstrated animated GIFs of hddfragchk in operation, showing the progression of the filesystem fragmentation as pvrsim runs.
The first filesystem was XFS, which showed clear color lines with small amounts of fragmentation visible as the files moved around the disk in the highly accelerated animation. The other filesystem he showed was NTFS, which resembled static as you might see on a television that is not receiving signal, as the filesystem allocated blocks wherever it could find room without much apparent planning.
Biesheuvel then went on to show a graph showing an assortment of filesystems and their speed of writing over time. All filesystems showed a decline over time, with some being worse than others, though I did not manage to scribble down the list of which was which.
Relative speed is highly filesystem dependent, he concluded. Filesystems should maintain the design principle that a single data stream should stick to its own extent, while multiple data streams must each be separately assigned their own extents.
While extents were not explicitly explained during the talk, it can be deduced from the discussion that they are sections of the filesystem preallocated to a file. He expressed optimal hard drive fragmentation performance mathematically, and stated that equilibrium is achieved when as many fragments are removed as are created.
Biesheuvel also says that there is a sweet spot in fragmentation prevention with a minimum guarantee of five percent free space. At five percent free space, fragmentation is reduced. Ultimately, he says, relative speed is a useful measure of filesystem fragmentation. The worst filesystem performers do not drop below 60% of optimal speed.
Why userspace sucks
Dave Jones, maintainer of the Fedora kernel, gave his "Why Userspace Sucks (Or, 101 Really Dumb Things Your App Shouldn't Do)" talk in the afternoon for a standing-room-only crowd. Jones' talk focused on his efforts at reducing the boot time in Fedora Core 5 (FC5), and the shocking discoveries he made along the way.
He started his work by patching the kernel to print a record of all file accesses to a log to look for waste.
He found that, on boot, FC5 was touching 79,000 files and opening 26,000 of them. On shutdown, 23,000 files were touched, of which 7,000 were opened.
The Hardware Abstraction Layer (HAL) tracks hardware being added and removed from the system, to allow desktop apps to locate and use hardware. Jones says that HAL takes the approach "if it's a file, I'll open it." HAL opened and reread some XML files as many as 54 times, he found. CUPS, the printer daemon, performed 2,500 stat() calls and opened 500 files on startup, as it checked for every printer known to man.
X.org also goes overboard, according to Jones. Jones showed that X.org scans through the PCI devices in order of all potential addresses, followed by seemingly random addresses for additional PCI devices, before starting over and giving up. He paid special attention to X fonts, noting that he found that X was opening a large number of TrueType fonts on his test system.
To see what it was up to, he installed 6,000 TrueType fonts. Gnome-session, he found, touched just shy of 2,500 of them, and opened 2,434 fonts. Metacity opened 238, and the task bar manager opened 349. Even the sound mixer opened 860 fonts. The X font server, he found, was rebuilding its cache by loading every font on the system. He described the font problems as bizarre.
The next aspect of his problem identification was timers. The kernel sucks too, he said: USB fires a timer every 256 milliseconds, for example. The keyboard and mouse ports are also polled regularly, to allow support for hotpluggable PS/2 keyboards and mice. And the little flashing cursor in the console? Yes, its timer doesn't stop when X is running, so the little console cursor will continue to flash, wasting a few more CPU cycles.
Jones says that you don't need the patched kernel and tools that he used to do the tests. Using strace, ltrace, and Valgrind is plenty to do the work to get rid of waste, says Jones.
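The same kind of accounting Jones did with a patched kernel can be approximated within a single process. As a hedged illustration of the idea (not one of Jones' tools), Python's audit hooks, available since Python 3.8, can count how many times a process opens files:

```python
import os
import sys
import tempfile

# Count every "open" audit event the interpreter raises, the way Jones's
# patched kernel logged file accesses system-wide (but scoped to one process).
open_count = 0

def count_opens(event, args):
    global open_count
    if event == "open":
        open_count += 1

sys.addaudithook(count_opens)

# Touch a file twice and watch the counter move.
path = os.path.join(tempfile.gettempdir(), "open_audit_demo.txt")
with open(path, "w") as f:   # raises one "open" audit event
    f.write("hello")
with open(path) as f:        # raises another
    f.read()
os.remove(path)
```

For whole programs, strace's summary mode gives the same visibility from the outside, which is what Jones recommends.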
An audience member asked, after fixing all these little issues, how much time is saved? Jones replied that roughly half the time wasted by unnecessary file access was saved. However, the time saved is taken up by new features and applications that also consume system resources. As a result, says Jones, it is necessary to do this kind of extensive testing regularly.
Another attendee asked, how can we avoid these problems on an ongoing basis? One suggestion is to have users who don't program, but wish to be involved in improving Linux, take on the testing work. The last question of the question-and-peanut-gallery-answer session at the end of the talk asked if KDE was as bad as GNOME in these tests. Jones replied that he had not tried.
As the Q&A continued, the session became more of a Birds of a Feather (BoF) than a presentation. The back-and-forth between Jones and the audience had most of the packed room in stitches most of the way through.
Bridging the digital divide
In the evening, I attended a BoF session run by David Hellier, a research engineer at Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO), on the topic of bridging the digital divide. His essay on the topic won him an IBM T60 a day earlier.
Hellier says he would like to use Linux and Open Source to help bring education to the millions of extremely poor people throughout the world. In Africa alone, 44 million primary aged children cannot get a basic education.
A participant mentioned that there are 347 languages in the world which more than a million people speak, not all of which have translations of software, though some even smaller ones have translated versions of Linux. Another person pointed out that translating an operating system and applications is only part of the battle. The important part is translating the general knowledge associated with it. Tools that are translated must also be available offline. Remote, poor communities are unlikely to have much in the way of Internet access even if they are lucky enough to have electricity.
Linux developers, Hellier says, are largely employed by big companies. As such, they are in a position to suggest ways to get their companies to help close this digital divide.
How is it different from missionary work, one person asked, to send people with these unfamiliar tools to the depths of the developing world? Hellier responded that the key difference is that governments all over the world are screaming for all the help they can get.
Major software companies are going to the developing world to evangelize their wares, however, and it is important to counteract this effect. The ultimate goal is to help people help themselves, noted Hellier.
The discussion moved on to ask how to address this topic on a more regular basis than at conferences once every year or two in a BoF session. Hellier had started a wiki for discussion on bridging the digital divide at olsdigitaldivide.wikispaces.com prior to the start of the session, and it was suggested that an IRC channel be created for further discussion, a method an audience member noted has been used successfully by kernel developers for years; so an IRC channel, #digitaldivide, was created on irc.oftc.net.
Hellier also recommended looking at a number of tools, including the Learning Activity Management System, Moodle, and the sysadmin-free usability of Edubuntu.
Linux on the laptop
The last session I attended yesterday was the BoF session run by Patrick Mochel of Intel on the topic of Linux on the laptop. It was an open BoF with no specific agenda and no slides. Mochel noted the presence of several people relevant to the discussion, including some developers of HAL, udev, the kernel, ACPI, and Bluetooth.
The discussion began with talk about suspend and resume support on recent laptops and the weaknesses therein. Mochel noted that while suspend and resume support is a nice thing, it does not buy you anything with the most critical aspect of a laptop: battery life. This brought about a lengthy discussion of various things that waste electricity in a laptop. The sound device, for example, should be disabled when it is not being actively written to, and network devices that are not being used should be disabled to conserve power.
The discussion evolved quickly, turning next to network states. It is possible, argued Mochel, to have the network device down until a cable is plugged into it, in the case of wired networking, and only come up when a cable-connected interrupt is received. This can be important because a network card that is on is wasting power if it is not connected to a network.
Removing a kernel module does not necessarily reduce power to a device, someone noted. Fedora only removes modules when suspend cannot be achieved without doing so, commented another.
Another participant asked whether there's any documentation on how drivers should work with regards to power management? The answers were less than straightforward, with one person asking if there's documentation on how drivers should work for anything at all. Another suggested posting a patch to the Linux kernel mailing list and seeing the reaction.
The topic of tablet PCs and rotating touch screens was brought up. Touch screen support has been improving over the last few years, it was noted, but mainly in userland. Someone commented that the orientation of the rotating monitors on tablets is determined by differential altimeters sensing air pressure differences between the ends and determining orientation as a result.
Rotating screens are not only a problem for X, says Linux International's Jon 'maddog' Hall, but for consoles as well. Pavel Machek replied that 2.6.16 and newer kernels allow command line tools to rotate the console.
The discussion then moved into biometrics, in light of the fingerprint scanner present on many newer IBM laptops. Microsoft, came a comment, is pushing for a biometric API in its next version of Windows. A biometric API exists for Linux, and sort of works. It supports the fingerprint scanner by comparing the image taken by the scanner to stored images, a solution noted by others present to be less than secure, since the image is not hashed, something that has been done for user passwords on Linux for years.
The second of four days of the conference saw more technical talks than the first, with Dave Jones' talk on userspace being the highlight of the day.
Originally posted to Linux.com 2006-07-21; reposted here 2019-11-24.
First day at the Ottawa Linux Symposium
OTTAWA -- The 8th annual Ottawa Linux Symposium (OLS) kicked off Wednesday in Ottawa, Canada at the Ottawa Congress Centre. Jonathan Corbet, co-founder of Linux Weekly News, opened the symposium with The Kernel Report, an update on the state of the kernel since last year.
Corbet started his talk with a brief recap of the Linux kernel development process. According to Corbet, Linux kernels are now on a two-to-three-month release cycle. The current Linux kernel version is 2.6.17, with 2.6.18 expected shortly. All 2.6.x kernels are major releases, with 2.6.x.y kernels being bugfix releases.
Corbet says that there will not be a 2.7 kernel tree for the foreseeable future, not until there is a major, earth-shattering change that will break everything and thereby require an unstable kernel tree.
The major release cycle developers use now takes approximately 8 weeks. In week 0, new features are included in the kernel in what is termed the merge window. This is typically in the form of several thousand kernel patches. This process ends when Linus decides there has been enough and the merge window is decreed closed.
The kernel then goes into release candidate mode, with effort going into stabilization and bugfixing.
Release candidate (rc) kernels are released periodically, and by the theoretical eighth week (which usually slips a bit), a major release is made. Subsequently, all bugfixes and patches to that kernel come in the form of 2.6.x.y version numbers.
The process of merge windows started a year ago, Corbet said, and the result has been the relative predictability of stable kernel releases. New features come out quickly instead of spending years in queue; distributions are keeping up with more current kernels than they had been.
Corbet showed a graph of kernel patches over time, showing how the number of patches going into the kernel has changed from a more or less straight line to a staircase pattern, with the help of the merge window release cycle now in use.
The quality of the new kernel release cycle has most people happy, Corbet said, emphasizing "most." The perception among some, he said, is that the quality of the kernels is in decline, with too much emphasis on new features, and more bugs going in than coming out.
Corbet says that there's not a firm kernel bug count. As the number of users increases, he noted, so too does the number of bug reports. More code means more bugs, even if the proportion of bugs (bugs per thousand lines of code) drops.
Many bugs being fixed are very old bugs, Corbet says. Of two recent security fixes, one was for a one-year-old security flaw, and the second was for a three-year-old security flaw. Fewer bugs, and a single bug database to centralize kernel flaws, would be nice to have, and Corbet says that he expects progress on that front is on the way. Corbet also pointed out that bug tracking isn't very helpful if the bugs don't get fixed.
Kernel developers often lack the hardware needed to fix bugs, and so the bugfix process can require extensive back and forth exchanges of tests and results. This process is very slow, and often times one party or the other gets bored of the process and the bug remains in the kernel.
Another problem, Corbet says, is that there is no boss to direct bug fixing efforts unless there is a corporate interest in fixing a bug somewhere and that company puts the resources in to getting specific bugs fixed. Kernel developers are also often reluctant to leave their little corner of the kernel, he noted.
Introducing bugs in the first place is becoming harder, said Corbet. Better APIs and more use of automated bugcatching tools are improving the situation. It has also been suggested that the Linux kernel do major releases that are strictly about fixing bugs, not adding features. Another suggestion floating around is the assignment of a kernel bugmaster. It would need to be a funded position, he noted.
Future kernel development
Corbet went on to summarize the major changes in the kernel since this time last year, when kernel 2.6.12 was current. Among other specifics about the kernels released since, Corbet noted that Linux kernel 2.6.15 was released January 2nd, 2006, 15 years to the day after Linus bought his first development box to begin work on the kernel.
The kernel has a 15year history, but it doesn't have a fiveyear roadmap. Corbet says that the kernel has no specific timetable for features, or even a specific list of features that will be implemented, and that there's no way to force development of any particular feature without specific funding. No one knows what hardware will be out down the road, or what users will want. What future we can predict, though, is the next kernel release.
The kernel 2.6.18 merge window has closed, Corbet says, and a number of changes will be in the upcoming release, including a new core time subsystem, a massive patch set for serial ATA (including error handling), and a kernel lock validator. The latter of these changes is designed to help with kernel development. Locks are designed to keep threads apart, he explained, and they're difficult to get right. He also noted that devfs would be removed from the kernel in 2.6.18, which generated widespread applause.
Corbet went on to discuss challenges with integrating virtualization support into the kernel, noting that the various virtualization programs should not each need to maintain their own trees, and need to come up with a uniform set of patches for the kernel to avoid each having its own set. He also spent some time discussing kernel security in the form of SELinux, which he said is acquiring real administrative tools, and AppArmor, an SELinux competitor recently acquired by Novell.
The Linux kernel is very unlikely to switch to GPL version 3, Corbet noted at the end of his excellent 45 minute talk, as changing the license would require a consensus of all kernel developers, who still individually hold the copyrights on their little bits of the code. This would not be helped, he noted candidly, by the fact that some are dead.
The slides from Corbet's talk are available here.
Fully automated testing
Later in the day, I attended a talk by Google's Martin J. Bligh entitled "Fully Automated Testing." Bligh started by asking: Why? Automated testing, he says, is not just necessary because testing is a boring occupation. With the kernel 2.6 tree's new development cycle, the rate of change of the kernel is quite scary. Linux is very widely used now, and old methods of bug reporting are no longer adequate.
It used to be that kernels were pushed out, and the developers could wait for feedback from users. Those days are over. Bligh noted that machines are cheap, compared to people, and automated testers don't disappear. Is automated testing the solution to world peace and hunger? No, but Bligh says it's part of a solution to kernel bugs. Automated bug testing requires more coders and more regression testing.
The testing is done upstream, he says, so the testing can be done prior to releases instead of after them.
The fewer users exposed to a bug, the less pain caused. If bugs are found early, new code can be pushed back out of the tree until it is fixed without causing additional problems, before other features come to depend on it. The earlier bugs can be found, Bligh says, the better.
Bligh noted that extensive automated testing is done on the kernel twice a day. The test system is written in Python, and he discussed at length why Python was chosen as the language for the system and why other languages were not suitable. He also spent some time showing the audience test output from the system.
He described Python as a language that meets the requirements for the task because it is easy to modify and maintain, is not write-only, has exception handling, is powerful but not necessarily fast, is easy to learn, and has wide libraries of modules to leverage.
He described Perl as a write-only language, and said that while people can do amazing things with the shell, it is not appropriate for the purpose. He said with a grin that he has a lot of respect for what people can do in the shell, but none for choosing to do it that way in the first place.
One thing he particularly likes about Python, he said, is its usage of indentation. Unlike other languages, Bligh noted, Python is read by the computer the same way as it is read by a person, resulting in fewer bugs.
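A trivial example of the point: the block structure a reader perceives from the indentation is exactly the structure the interpreter executes.

```python
def total(values):
    s = 0
    for v in values:
        s += v      # indented: inside the loop
    return s        # dedented: runs once, after the loop
```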
Bligh says test.kernel.org is better than it was before, but it is a tiny fraction of what could be done.
Bligh says that kernel testing would be improved by an open, pluggable client to share tests. He also called for more upstream testing, and for companies to get involved in testing.
Is it cheaper, Bligh asked, for a company to debug code itself or to help track down bugs for the community to fix before it affects the company? He described the current automated test efforts as the tip of the iceberg. He also encouraged attendees to get involved by downloading the test harness and reading the wiki to get started.
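The core loop of such a harness is simple, which is part of Bligh's point. The sketch below is a minimal illustration of the idea, not the actual test.kernel.org code:

```python
import subprocess
import sys

# Minimal test-harness sketch: run each test command, capture its exit
# status, and tabulate pass/fail.
def run_tests(commands):
    results = {}
    for name, argv in commands.items():
        proc = subprocess.run(argv, capture_output=True)
        results[name] = "PASS" if proc.returncode == 0 else "FAIL"
    return results

# Two trivial "tests", using the Python interpreter itself as the workload.
outcome = run_tests({
    "always_passes": [sys.executable, "-c", "raise SystemExit(0)"],
    "always_fails": [sys.executable, "-c", "raise SystemExit(1)"],
})
```

A real harness adds the hard parts: provisioning machines, booting candidate kernels, collecting logs, and reporting results upstream.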
Len Brown of the Intel Open Source Technology Centre, and maintainer of the kernel ACPI subsystem, led the next session, "Linux Laptop Battery Life". By their nature, Brown says, laptops are the source of most innovation in the area of power management.
The first part of his talk centered around how to measure how much power a laptop is using in the first place as a baseline. The first method he suggested is to use an AC watt meter with the battery out of the laptop, if possible. It's a $100 test, he said, but fails on the AC to DC power conversion and on the fact that most laptops are aware that they are plugged into AC, and therefore unlimited, power and disable some power saving measures used when a battery is active.
The second method is fundamentally the same, but with a DC watt meter to eliminate the power loss caused by the power brick from the math. A more expensive but somewhat more accurate method, he said, is to set up a DC input system through the laptop battery leads.
The simplest solution, though, he pointed out, is to simply use the laptop's built-in information about the battery. Simply run a fully charged battery to fully depleted and see how long it takes. Compare that to the wattage of the battery and you have your power usage. For example, he said, a 53 watt-hour battery that runs a laptop for one hour means the laptop is running at 53 watts. If it lasts two hours, it is only using 26.5 watts, and so forth.
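The arithmetic is simple enough to express directly:

```python
# Average power draw from a full-to-empty battery run: battery capacity in
# watt-hours divided by observed runtime in hours gives watts.
def average_draw_watts(capacity_wh, runtime_hours):
    return capacity_wh / runtime_hours
```

For Brown's example, a 53 watt-hour battery lasting two hours gives an average draw of 26.5 watts.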
He announced the release of a GPLed program he has been working on which, he emphasised, does not do benchmarks, called the Linux Battery Life Tool Kit. The code is available on his directory on kernel.org.
The first test of the testing program, he says, is the idle test. Idling is a most basic function of a laptop. It even idles between key presses on the keyboard, he noted. The next test he said is a Web read test, which looks at a different Web page every two minutes. He described it as an idle test with eye candy and said the results are indistinguishable from idle.
The next test is an OpenOffice.org-based test that specifically requires version 1.1.4 of OpenOffice.org.
The next two tests are DVD playing with MPlayer and a software developer workload test, consisting of browsing and compiling source code.
He gave a number of examples based on his specific laptop, which he indicated vary widely from one laptop to the next, showing the results of his power usage tests under different circumstances.
The results gave both power usage and performance figures for the different circumstances tested.
Among the statistics demonstrated were performance and power usage figures for his dual-core laptop running with one core enabled versus both, and the effect of a 5400 RPM hard drive versus a 7200 RPM one.
The faster hard drive gave a huge performance boost for software development, but not noticeably anywhere else, and cost only a small penalty in power usage. Enabling the second core also cost little in extra power, but provided a significant performance boost. The biggest difference, he noted, was in LCD brightness: from the brightest to the dimmest setting, the difference in his laptop's battery life was more than 25%.
He also compared Windows XP to Linux performance on his laptop, noting again that such differences vary from one laptop to the next. On his laptop, DVD playing was noticeably more power efficient in Windows than in Linux. He credited this to WinDVD buffering the DVD instead of reading it constantly, as MPlayer does.
The day was capped off by a reception hosted by Intel, at which there were no speakers, but the winners of an Intel-sponsored essay competition about Linux and open source were announced. The winner, David Hellier, received a very nice laptop for an essay on bridging the digital divide. The runner-up received an iPod for his essay, "I can."
Originally posted to Linux.com 2006-07-20; reposted here 2019-11-24.
Posted at 18:43 on July 20, 2006
PostgreSQL Anniversary Summit a success
This weekend marked the 10th anniversary of PostgreSQL's posting as a public, open source project. To celebrate, the PostgreSQL project held a two-day conference at Ryerson University in downtown Toronto, Ontario, Canada.
The conference started with a keynote address by Bruce Momjian, one of the longest-serving and best-known developers on the project, discussing why the conference was taking place, a bit of the history of PostgreSQL, and the future. Momjian started off his talk by announcing, to laughs, that the PostgreSQL patch queue is empty.
Momjian called his role at PostgreSQL a tremendous honor, and says he does not know what the next ten years will bring for the project. He did predict that tools like PostgreSQL will become more popular.
Great days, Momjian philosophized, rarely announce themselves.
Weddings and graduations come with dates and invitations, but most other significant events just happen. Along the same lines, he noted that open source developers evolve into their developer roles. Many start with submitting a patch during a few hours free time. The contributions snowball and they eventually find themselves with full time employment as a result of their contributions.
PostgreSQL started in the 1980s at the University of California, Berkeley, though most of the people from the era went on to get "regular jobs", says Momjian. In April of 1996, Marc Fournier sent an email to the postgres95 mailing list noting a number of major flaws with the software. Momjian described the state of development as being in maintenance mode. Fournier suggested in his email that, given time and room, it could become a useful project.
The discussion evolved and Fournier offered to host a development server for the project, allowing it to escape from Berkeley and become a modern open source project. Fournier noted in the discussion at the time that Postgres would need to move forward with the help of a few contributors with a lot of time. He commented that a lot of contributors with a little bit of time would not be equivalent.
Fournier's offer to host a CVSup server came on July 8th, 1996, the date the conference commemorates. Pretty soon, work began toward an actual release, allowing the project to graduate out of maintenance mode.
Momjian went on to show the evolution of the PostgreSQL Web site since 1997, from a comical logo showing an elephant smashing a brick wall to the current professional image of the organization.
Momjian showed a map of the world with markers everywhere he had been representing PostgreSQL, covering much of North America, Europe, and Asia, and commenting that he would soon be adding India and Pakistan to his list of countries. He concluded his keynote with what he termed a show and tell, showing CDs distributed at several points over the history of the project, as well as a Japanese PostgreSQL manual.
Following the keynote, Andy Astor, CEO of EnterpriseDB, got up to make a brief announcement, saying the company has grown to around 100 employees and is based entirely on PostgreSQL. "Thank you PostgreSQL," he says, "for giving me a job to go to." He announced that EnterpriseDB would be giving $25,000 to PostgreSQL as part of ongoing funding earmarked strictly for feature development.
The next talk was by Ayush Parashar of Greenplum on the topic of database performance improvements in the PostgreSQL-based Bizgres database. Parashar discussed various algorithmic improvements to Bizgres' sort and copy functionality. Using a bitmap index instead of a B-tree, he demonstrated vast improvements in large-database performance at low cardinality.
Parashar was asked when the improvements would be ported into the PostgreSQL tree. Another Greenplum employee answered, saying it would be after the code was further tested and hardened.
PostgreSQL developer and conference organizer Josh Berkus noted that PostgreSQL 8.2 is going into feature freeze in just three weeks, and the Bizgres patches should be submitted as soon as possible to allow them to be integrated into the rest of the tree properly.
The third session was a bit difficult for me to keep up with, but from what I understood of it, seemed quite fascinating. In the course of a one-hour block, 10 speakers were given exactly five minutes each in what were termed lightning talks. The first two were by employees of Voice over Internet Protocol (VoIP) specialist Skype.
Skype, says Hannu Krosing, runs on PostgreSQL internally. In order to scale to the massive size the company is working toward, it is working on a scalable database system which they're calling PL/Proxy.
According to Krosing, the project is soon to be open sourced. PL/Proxy works on the basic principle of splitting databases up by function, and then providing a simple way for these separate databases to be integrated.
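The principle Krosing described, splitting a database up by function or key and routing each request to the right partition, can be illustrated with a minimal sketch. This is not PL/Proxy's actual API (which had not yet been released at the time); the connection names and hash scheme here are purely hypothetical:

```python
# Hypothetical partition names; in a real deployment these would be
# connection strings for separate PostgreSQL databases.
PARTITIONS = ["users_p0", "users_p1", "users_p2", "users_p3"]

def stable_hash(key: str) -> int:
    """A simple deterministic hash (Python's built-in hash() is salted
    per process, so it cannot be used for stable partitioning)."""
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) % (2 ** 32)
    return h

def partition_for(username: str) -> str:
    """Route every request for a given user to the same partition."""
    return PARTITIONS[stable_hash(username) % len(PARTITIONS)]
```

The point of the design is that the same key always lands on the same partition, so the separate databases can be queried as if they were one.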
The second part of the Skype lightning talk was about Skytools, presented by Skype's Asko Oja. These are queueing tools designed for failover and generic queueing.
The third lightning talk was by Hiroshi Saito, a member of the Japanese PostgreSQL Users Group (JPUG), discussing an SNMP daemon for PostgreSQL, pgsnmpd, which allows operational monitoring of PostgreSQL databases.
The next in the series was about DBD::Pg, described by Greg Sabino Mullane as the integration of the best database and the best language: PostgreSQL and Perl. The DBD::Pg module, Mullane says, makes do() loops very fast by using libpq, the PostgreSQL client library. He cited some other improvements, such as UTF-8 (Unicode) support.
He says future releases of DBD::Pg will be developed on Subversion or svk, in an effort to move away from CVS. He hopes, he added, that PostgreSQL itself moves to Subversion. He says he would also like to add Windows, Perl 6, Parrot, and DBI v2 support to the module.
The fifth of the ten sessionlets was by someone calling himself only "M", discussing PGX, PostgreSQL client support for Mac OS X. He explained that it is not intended to be a PostgreSQL admin tool, but rather a simple front end tool for PostgreSQL databases.
PGX allows non-blocking execution, which means the user can continue working with the program while it's off querying the database. Asked if it is possible to cancel a query, he was very succinct in saying that that capability had not yet been written. PGX is written in Objective-C, and allows the simultaneous querying of multiple databases with the same queries.
The sixth session was by Jean-Paul Argudo on the topic of Slony-I as a generic solution for aggregating data across multiple installations. Instead of replicating a master database to a network of slaves, he explained, Slony-I can use a slave database to replicate a network of master databases. Users, he says, do not want to connect to each database separately.
The next in the series of brief discussions was on the topic of Red Hat clustering, by Devrim Gündüz of Command Prompt, in Turkey. Gündüz discussed PostgreSQL with the Red Hat Cluster Suite. He described it as a redundant system for data, host, server, and power. According to Gündüz, there is no time for downtime. All it needs to work, he says, is hardware powerful enough to run Red Hat Enterprise Linux, and between two and eight servers with identical configurations.
The eighth sessionlet was by Neil Conway, about TelegraphCQ, a Berkeley research project. The idea behind TelegraphCQ, he says, is to allow streamed queries. The queries, he says, are long-lived, but the data is short-lived. As an example use for such a system, Conway described monitoring a network of security sensors, with action being taken based on the streamed query.
More information on this project can be found at telegraph.cs.berkeley.edu.
The ninth session was by Alvaro Herrera on the topic of autovacuum maintenance windows. He explained that the system is based on cron, the task scheduler on most Unix-based systems. It allows maintenance windows to be specified so that database cleanup can be scheduled to be carried out by the database during that database's off-peak hours.
The final lightning session was presented by David Fetter, about running a Relational Database Management System as an object within the database. He briefly discussed performance differences between object-based and relational databases.
The lightning sessions concluded the morning portion of the first day. In the afternoon, Gavin Sherry and Neil Conway presented a pair of hour-and-a-half back-to-back sessions called an introduction to hacking PostgreSQL. After checking that nearly everyone in the room had at least a basic knowledge of the C programming language, they got into it.
You need to know C to hack PostgreSQL, Conway says. Fortunately, it's an easy language to learn.
PostgreSQL, he added, is a mature codebase and good code to help learn C from. Conway says Unix system programming knowledge is useful, but not necessary, depending on what part of PostgreSQL you want to hack on.
He gave a few technical pointers on debugging, such as ensuring that if there's a new bug in your code that you can't explain, you run make clean and recompile from scratch to ensure everything is current.
He recommended ensuring that you have a good text editor, suggesting Emacs, to make your life easier.
He also recommended a number of tools to reduce the amount of development time wasted debugging, such as ccache, distcc, and Valgrind.
Conway and Sherry traded off for the rest of the presentation, providing an entertaining, easy-to-follow tutorial session. Among the things they warned against were idiosyncrasies in coding style that annoy people and waste time for no discernible gain.
Read the code around what you are patching or contributing and make your changes conform to the adjacent style.
When writing patches, especially ones that add features, send the idea to the project first to make sure it is one that would be welcome. They cited an example of someone who wrote a 25,000-line patch that had to be rejected.
When determining what patches to write, they suggest asking yourself a number of questions, for example:
Is this patch or feature useful?
Is it a patch for the PostgreSQL backend, or is it for the foundry or contrib/ directory?
Is it something that is already defined by the SQL standard?
Is it something anyone has suggested before? Check the mailing list archives and todo list.
Most ideas, they cautioned, are, in fact, bad. Also, they warned, make sure your submitted code is well commented, and tested properly.
The PostgreSQL conference will be having a code sprint following the main part of the conference. They recommended checking the code sprint wiki for ideas to cut your teeth on.
PostgreSQL doesn't like centralization
The last session of the first day was on the topic of fundraising, hosted by Berkus. The discussion started with an introduction to the Japanese PostgreSQL Users Group (JPUG) by Hiroki Kataoka.
In Japan, Kataoka says, PostgreSQL is more popular than rival database MySQL, owing largely to earlier Japanese language support in PostgreSQL. JPUG started with 32 members and eight directors on July 23rd, 1999, Kataoka says. It now boasts 2,982 members, 26 directors, and a Japaneselanguage mailing list with around 7,000 subscribers.
He showed a map of Japan broken down into its 48 provinces, showing which had JPUG regional chapters or which otherwise had a PostgreSQL presence. Nearly half the provinces of Japan have a JPUG regional chapter. JPUG offers a number of activities and incentives, including PostgreSQL seminars, summer camps, a regular newsletter, PostgreSQL stickers and PostgreSQL water bottles for distribution at JPUG events. The JPUG, which is a registered nonprofit in Japan, has numerous corporate sponsors.
Jean-Paul Argudo introduced the French PostgreSQL organization, postgresqlfr.org, of which he is treasurer. It was started in 2004, Argudo says. Its Web site is powered by Drupal, and the group has a presence on irc.freenode.net in #postgresqlfr. It's a registered nonprofit under the French law of 1901. It has 50 members who pay €20 per year each, and the Web site has some 2,000 users. The organization invites donations through its Web site, but managed a mere €25 in Web donations in its first year.
The Web site, Argudo says, has around 1,400 pages of translated PostgreSQL documentation, information on migration, and translated news and information from the main PostgreSQL website. Work is in progress to produce books, he added.
Berkus introduced the rest of the world's organizations, noting that PostgreSQL currently deals with four nonprofit organizations for fundraising: JPUG in Japan, PostgreSQLfr in France, FFIS in Germany, and USbased Software in the Public Interest (SPI) for most of the rest of the world. PostgreSQL joined SPI after finding that creating their own 501(c)3 US nonprofit organization was a very difficult and expensive proposition.
PostgreSQL, he says, does not like centralization.
Following these introductions, Berkus led a discussion on the nitty gritty of PostgreSQL's internal political structure, especially as it related to dealing with the nonprofit organizations and organizing PostgreSQL's money.
Day two of the conference
The second day of the conference was far more intensely technical than the first, with a variety of talks by developers about their PostgreSQL subprojects such as pgpool, pgcluster, Tsearch2, and other topics.
During the morning session, Peter St. Onge of the Department of Economics at the University of Toronto gave a talk on the role of databases in scientific research. St. Onge says that PostgreSQL's flexibility, extensibility, and speed, make it ideal for the research environment.
He discussed the unique needs of databases in research environments. Each lab, he says, is different.
Most currently operate on a Linux, Apache, MySQL and PHP (LAMP) platform, but research labs are switching to what he termed a Linux, Apache, PostgreSQL, and PHP (LAPP) platform.
St. Onge says his goal is to put data-handling logic into the database backend. From samples, to mass spectroscopy, to analysis, to storage, to archiving, every step that a person has access to creates room for error; every step that can be automated is an improvement.
A lot of data in different labs is stored in different units, he noted. Allowing basic functions within the database such as conversion of degrees Fahrenheit to degrees Celsius, for example, would allow better integration of data from multiple research facilities.
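The sort of conversion function St. Onge described could live inside the database as a user-defined function; the logic itself is trivial, shown here in plain Python as an illustration (the function name is mine, not from the talk):

```python
def fahrenheit_to_celsius(temp_f: float) -> float:
    """Convert a temperature reading from Fahrenheit to Celsius,
    so data recorded in different units can be compared directly."""
    return (temp_f - 32.0) * 5.0 / 9.0

print(fahrenheit_to_celsius(212.0))  # 100.0, the boiling point of water
print(fahrenheit_to_celsius(32.0))   # 0.0, the freezing point
```

Installed in the database itself, such a function lets every lab query against a common unit without each application reimplementing the conversion.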
The 10th anniversary PostgreSQL conference went well, overall. Session time limits were strictly enforced and technical problems were at a minimum, making for a smoothly run conference. All the talks were recorded, and most of them were recorded on video. Anyone interested in hearing any of the talks should be able to do so on the conference Web site in the next few weeks.
The material was largely highly technical, often way over my head, but the people were down to earth, and judging by the reactions of people around me, most understood and appreciated what was being said.
The conference operated on a budget of around $30,000, including travel stipends for many of the presenters.
Out of 90 people registered, Berkus says that only five failed to show, "some due to specific issues (like health problems)." Four "extra" people who had not registered due to some significant communications issues did show, "so we were still slightly over capacity." A further 11 people were waitlisted and unable to attend as a result.
There is already discussion of a reprise of the conference. Says Berkus, "We're currently discussing the possibility of a conference next year, maybe even a 300attendee user conference. We're somewhat undecided about whether to do it next year or the year after though, and where it should be located. A survey will go up on the conference Web site sometime soon if you're interested in the next PostgreSQL conference, please watch for it (use the RSS feed) and fill it out."
As for the long term consequences of the conference, Berkus says he's "hoping that it will lead to better coordination and communication in our really farflung community. Having developers from so many different parts of the community facetoface, even once, should help us overcome some barriers of language, distance and time zones.
"We should see some accelerated code development soon with people sharing ideas. For example, I think the various replication/clustering teams learned a lot from each other. I also think that, having met people in person, there will be subtle changes in the way we regard each other back on the mailing lists. A bunch of people didn't look or sound like I expected. I'm not sure what those attitude changes will be, but I'll find out soon enough."
This year's conference was organized by four PostgreSQL volunteers: Berkus, Andrew Sullivan, Peter Eisentraut, and Gavin Sherry. Next time, says Berkus, they're hiring professional help to organize any conferences. "Now," he says, "I'm finally going to get some sleep."
Originally posted to Linux.com 2006-07-10; reposted here 2019-11-24.
Posted at 18:37 on July 10, 2006
Wine, desktops, and standards at LinuxWorld Toronto
TORONTO -- The final day of the LinuxWorld Conference & Expo Toronto was a busy one. Novell Canada CTO Ross Chevalier delivered a keynote address on why this year is the year of corporate Linux desktop adoption, as opposed to all those previous years that weren't; Free Standards Group executive director Jim Zemlin explained the importance of the Linux Standard Base; and developer Ulrich Czekalla gave an excellent presentation on the state of Wine.
Czekalla discussed the status of the Wine (Wine Is Not an Emulator) project's Win32 API implementation for Linux, and gave his presentation using Microsoft PowerPoint running under Wine. Czekalla has been working with Wine since 1999, when his then-employer Corel needed it for WordPerfect and CorelDraw support in Linux. Czekalla is now an independent contractor, but he says he spends a lot of time working with CodeWeavers on Wine.
He expressed the importance of the Wine project, and cited a study by the Open Source Development Labs (OSDL) that says the number one concern of companies looking at migrating to Linux is that applications need to be able to run in the new environment. The applications, he says, were not Microsoft Office or other things for which open source substitutes exist, but things like Visual Basic programs and other applications developed inhouse, or niche applications, critical to the function of the business.
Wine started in 1993, initially to bring games to Linux. It is not an emulator, he stated; it is a free implementation of the Win32 API, intended to allow Windows executables to run in the Linux environment. It is released under the GNU Lesser General Public License (LGPL). Czekalla says that Wine has seen contributions from about 665 developers, with 30 to 40 active at any given time.
He explained the makeup of the layers between the operating system and the program being run in the form of a chart explaining at what level Wine runs on a Linux system. Wine runs in user space, he says, not kernel space, and therefore has no access to hardware or drivers. To the Linux machine, it is just an application. It runs between the kernel and the libraries needed to load the Windows executables, allowing Windows programs to find the Windows libraries they are expecting within Linux.
Theoretically, he says, applications in Wine should run just as fast as they do under Windows. As there is no emulation, there is nothing to slow the execution. However, Wine is still not considered a stable release, and has not yet been optimized, resulting in a lower performance. At the moment, Wine developers' efforts are focused on making it work, not on optimization. That, he says, will have to come later.
As work progresses on Wine, a lot of effort is put into making one particular application work at a time. Czekalla says that as problems with one application are fixed, many other applications will also become functional in Wine, as the features enabled for the targeted application are also needed by other applications. He cited the process of getting Microsoft Office 2003 to work under Wine as an example of this, calling the sideeffects for other programs "collateral damage."
Czekalla says that while Wine is included with most Linux distributions, it still often needs manual tweaking to make it work with different programs and, in spite of being in development for 13 years, is still at version 0.9. Wine releases always have bugs, he cautioned, and one should be prepared for odd crashes. Supporting Wine, he noted, requires someone with both Windows and Linux expertise.
To debug Wine, he says, requires the use of relay logs, which can be as large as a gigabyte, to see what happened within the application as it progressed, in an effort to figure out what might have killed it. He says to expect it to cost about $1,000 per bug to fix minor bugs, and between $4,000 and $20,000 to fix more difficult ones. A Wine implementation can look great at first but become problematic and expensive.
Where is Wine going? There are some messy areas, says Czekalla. One of these is its Component Object Model (COM) implementation. After years of work, it is still not done and is at least six months away from being able to talk to a Microsoft Exchange server.
Right now about 70% of programs can install with Wine's implementation of the Microsoft Installer (msi.dll) library. In another year, Czekalla expects that number to be about 90%, though he pointed out that while most programs that install will work after being installed, there is no guarantee. One problem Wine has had is with the device-independent bitmap (DIB) engine. At the moment Wine supports only 24-bit color depth graphics, but many Windows programs still use only 16 bits, a limitation he described as a problem with X.
Because Wine is a userspace program and cannot see hardware, devices such as USB keys cannot be seen directly by programs running under Wine. An effect of this, he says, is that programs requiring Digital Rights Management (DRM) will not work under Wine, and won't unless and until Linux gets native DRM support. Companies that use DRM, he said, are not willing to help. He gave the latest version of Photoshop as one example of a program that has implemented DRM and will not work in Wine.
Before taking questions, Czekalla pointed out that a lot of work could be saved by using Microsoft's own libraries (DLLs), but the problem is licensing. You need an appropriate Windows license for any libraries you borrow, though it was pointed out by an audience member that most people already have unused Windows licenses they can use from hardware they've bought that included Windows. Czekalla says that Internet Explorer is the Windows application used most often in Wine, because it's needed by many people for specialized functionality relating to their jobs that will only work in IE.
The first person to ask a question noted that Wine sounds like a pain in the neck, so why should you use it? It is a pain, agreed Czekalla, but a lot of applications do work. For migration, the choices are basically Wine or VMware. Wine, he noted, doesn't require a Windows license or the performance hit of emulation, while VMware does.
Another attendee asked, who is using Wine? Czekalla cited Intel, Dreamworks, Disney, and basically any company that is migrating from Windows. Wine is unstable, he says, but when rolling your own implementation it can be quite stable for your purposes.
Is Novell involved in Wine? Czekalla says he knows some people at Novell who are contributing, but as a corporation he doesn't believe so.
One person asked about Wine's relationship with TransGaming. Czekalla says that not much is happening there. A few years ago, TransGaming modified and sold Wine's code, perfectly within Wine's license at the time, the X11 license, which doesn't require redistribution of derivative code. In response, Wine switched its license to the LGPL, which he surmised TransGaming didn't like, as the project hasn't heard much from it since. Czekalla expressed the hope that TransGaming and Wine can eventually get back to working together, as there is a lot of duplication between the two.
Linux for the corporate desktop
Novell Canada CTO Ross Chevalier's keynote on the third day of LWCE Toronto was "2006: The Year of Linux on the Corporate Desktop." He started by acknowledging that his keynote's title was a cliché, and asked: why 2006? Why not? It's always the year of something, and he believes Linux really is ready to hit the corporate desktop.
He pointed to a Novell project called Better Desktop that focuses on desktop usability for Linux. The idea behind it, he says, is to develop the Linux desktop interactively with actual users of Linux desktops in workplaces rather than test environments to figure out what it is they need. The goal is to help companies transition to Linux desktops without any serious retraining costs. People want things to work as they're used to them working, and the project is working toward that.
Eye candy, he says, is important.
It holds people's attention and helps them learn. He returned to that theme later with demonstrations of desktop Linux eye candy, such as a three dimensional cube rendering of multiple desktops in X, allowing the cube to be spun on screen and windows to be dragged between and across desktops.
In order for Linux to be widely adopted, he says, Linux desktop quality and hardware support must be better than the as-yet-unreleased Microsoft Vista and Mac OS X 10.5 operating systems.
To that end, USB and FireWire devices, printers, and the like must just work when attached, as users expect from Windows and Mac OS, or adoption slows.
Searching desktop computers, he says, has to be easy. People with large hard drives and large numbers of files need to be able to find stuff easily. He pointed out that the number one and number two Web sites according to Alexa's ratings are Yahoo and Google, respectively. Search, he says, is important. Password protected office files have to be as easy to use in OpenOffice.org as they are in Microsoft Office, he says, or adoption doesn't happen.
After listing the requirements needed for adoption, he started a demonstration on his SUSE Linux laptop to show that all the capabilities he listed as being required do indeed exist. He concluded that we are up to the point where a Windows user can go to Linux and feel comfortable.
The use of the Linux Standard Base
Jim Zemlin of the Free Standards Group headlined a session entitled "Open Source and Freedom: Why Open Standards are Crucial to Protecting your Linux Investment." The Free Standards Group is a California-based nonprofit organization, Zemlin says, with a broad range of members including "basically everyone but Microsoft." Membership spans the globe, he says, with members in many countries around the world.
The Free Standards Group's main focus is the Linux Standard Base (LSB), which is an ISO standard. The goal of the LSB is, according to Zemlin, to prevent Linux fragmentation. A common misconception about open source, he says, is that using open source software prevents the problem of vendor lock-in. He cautioned that this is not true: open source is a development methodology, and choice is not guaranteed.
Zemlin pointed out, somewhat ironically, that many companies insist on having one throat to choke while complaining about vendor lock-in. The single throat you are choking, he noted, is the vendor that has locked you in. With open standards, he says, you get a choice of throats to choke.
The Linux Standard Base specifies standards for installation, libraries, configuration, file placement, and system commands, among other things. Zemlin says this gives independent software vendors (ISVs) an easier way to develop Linux software, as they know where to find everything and can expect all Linux systems to have the same basic structure. Developing around the Linux Standard Base means that ISVs need only test their product against one distribution, rather than setting up several test cases, saving time and effort on the quality assurance side of development.
Because the FSG is controlled by open source vendors, its release cycle for an updated standard base is about 18 months, comparable to most distributions' own upgrade cycles. As a result of this constant updating, the ISO standard must also be updated regularly; the standard must move with the ever-fluid open source community it is attempting to standardize. All the major commercial Linux distributions, Zemlin says, are LSB-compliant.
This year's LinuxWorld Conference & Expo Toronto saw a far improved set of speakers over last year, with a noticeable increase in the ratio of useful speakers to marketing droids. Conference organizers state that the conference saw more preregistrations this year than in previous years, but final numbers on actual attendance have yet to come out.
Originally posted to Linux.com 2006-04-27; reposted here 2006-04-27.
Posted at 18:22 on April 27, 2006
Wikis, gateways, and Garbee at LinuxWorld Toronto
TORONTO -- Yesterday's second day of the LinuxWorld Conference & Expo in Toronto saw the opening of the exhibit floor, two keynotes, and a variety of interesting but not entirely topical sessions.
I started the day off by attending a session by Peter Thoeny of TWiki, who led a discussion of what wikis are and why they are useful in the workplace. Thoeny said several different wiki implementations are available, including some in black-box appliances that can be plugged into a network ready to run. Among common wikis, the best known is MediaWiki, the engine behind the popular Internet resource Wikipedia, an encyclopedia that can be maintained by anyone on the Internet who cares to contribute. At present, Thoeny says, it has approximately 1 million English entries and about 100,000 registered users. He opened Wikipedia and entered a minor correction to an article without logging in to demonstrate the capability.
Thoeny posed the rhetorical question: with anyone and everyone able to edit Wikipedia, won't it descend into utter chaos? Thoeny says no: on Wikipedia, experts in most domains keep their eyes on articles relating to their fields. If something incorrect is entered, it is usually fixed within minutes. Pages that are esoteric or that cover topics of narrow interest are not watched as closely, and errors can survive longer there before being corrected. Wikipedia also polices for copyrighted material to ensure that there are no violations.
Content on Wikipedia is released under the GNU Free Documentation License (GFDL), which allows the free redistribution of the content of the site with the condition that a notice of the license be included with it, very much like the GNU General Public License's (GPL) permissions and requirements related to code distribution.
Asked if it is possible to restrict access to a wiki, Thoeny says that it is designed to be world writable, but in a corporate environment, for example, it is relatively easy to lock a wiki down as needed.
Graffiti and spam on Wikipedia are also usually resolved quickly, Thoeny says, as they're easily identifiable, and revision control allows a previous version to be recovered and reposted relatively simply. On public wikis, spam and graffiti are not uncommon, but the problem, he noted, is nonexistent on internal corporate wikis, as they are not viewable by anyone outside the organization.
Thoeny also described some of the many features of wikis, such as WikiWords, which automatically link to other articles by the name of the word, and the ability for most wikis to accept plugins to include special characters or features.
Thoeny's own background is as the lead developer of TWiki, a wiki oriented toward corporate environments and developed by five core developers, 20 more developers with write access, and around 100 further contributors.
The various wiki systems share no common standard, though a conference to discuss the topic will take place in Denmark this August. About the only feature all wikis share, aside from being reader-editable, is that in text entry a blank line creates a new paragraph. As for how to create special text or make text bold, each wiki engine has its own way of doing things, Thoeny says.
He demonstrated TWiki's capabilities extensively, noting that the program is downloaded around 350 to 400 times per day.
A geneographical keynote
Nearly halfway through the three-day conference, the show's opening keynote was delivered by IBM Computational Biology Centre researcher Dr. Ajay Royyuru. While interesting, his keynote had little to do with Linux or open source.
In brief, Royyuru discussed a joint project between National Geographic and IBM to use aboriginal DNA from all over the world to try to trace premodern migratory patterns for humans. The data analysis is performed on IBM-provided Linux systems, according to one of the slides that appeared briefly on the screens at the front of the hall, providing a tenuous link to the topic of the conference.
Multihoming redundant network backup
Burhan Syed of Shaw Cable's Big Pipe subsidiary gave a late-morning talk on multihomed network routing built around the Border Gateway Protocol (BGP) and Linux.
Syed discussed the fact that many companies want network connections that won't die. In the case of one e-business customer, connection downtime costs around $37,000 per hour in lost business. He touched on the many solutions companies have come up with to solve the problem. The cheapest and most common, Syed says, is for a company to buy two connections to the Internet. One is the primary connection on which the servers run; the other sits disconnected on the floor, waiting for an outage on the first. When the outage comes, the tech guy comes, hopefully relatively quickly, unplugs the downed connection, plugs in the backup, and reconfigures the network to use the freshly connected alternate.
For a company running a Web site that is central to its business, this has some serious drawbacks. Aside from having to reconfigure its own network to use the different Internet service provider, the company's Domain Name Service table needs to repropagate to make the Web server accessible at its new location.
DNS is what allows browsers and other services to translate a human-readable name into the numeric IP address of a server. IP addresses are assigned by service providers, and when you switch between providers in this way, the address changes, and anyone trying to connect needs the current information to succeed.
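On a Linux system the lookup step is easy to see for yourself. These commands are illustrative; the `dig` query needs network access, so it is shown commented out:

```shell
# Ask the system resolver to turn a name into an address.
# "localhost" resolves via /etc/hosts even without a network connection.
getent hosts localhost
# A live DNS query against a real name server (requires connectivity):
#   dig +short www.example.com
```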
After discussing other better but still similar solutions, Syed explained how to do redundant Internet connections correctly.
You need a block of IP addresses assigned to your company by the American Registry for Internet Numbers (ARIN), BGP-aware routers, and an Autonomous System Number (ASN), also assigned by ARIN. All of this adds up to a lot of money, and IP addresses and ASNs are limited; in the case of ASNs, only around 65,000 are available for the entire world.
BGP works, he explained, as a networktonetwork routing protocol. A BGP router will exchange routing tables with other BGP routers to determine the most direct way between any two networks on the Internet. It won't go for the fastest or cheapest route, necessarily, just the most direct one, and it ignores the number of actual hops to get through a network and out the other side, only being interested in the actual network.
To apply this to your business, you need an IP address block of at least 256 addresses (a /24, of which 254 are usable for hosts); otherwise, Syed says, other BGP networks will ignore you for being too small. You then connect to two or more ISPs using your own IP addresses, which is made possible by your ASN. A separate BGP router connects to each of the ISPs, and the routers also communicate directly with each other, so they know when a connection has gone down and can trade BGP routing tables.
This is where Linux comes in, Syed explained. Hardware routers from the likes of Cisco can run into the thousands of dollars, yet lack enough memory to hold a large BGP routing table, which can run to around 128MB for 136,000 routing entries, while many routers come with only 64MB of RAM to work with. You can save a lot of money by instead using Linux systems running Zebra, a routing daemon that can be administered using Cisco-style commands.
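As a sketch of the setup Syed described, a minimal two-upstream Zebra bgpd configuration might look like the following. All of the numbers here are placeholders, not values from the talk: 64512 is a private-range ASN, and the prefixes are reserved documentation addresses.

```
! /etc/zebra/bgpd.conf -- hypothetical two-ISP multihoming sketch
router bgp 64512                          ! your ARIN-assigned ASN
 network 203.0.113.0/24                   ! the /24 you announce to both providers
 neighbor 198.51.100.1 remote-as 64601    ! BGP session to the first upstream ISP
 neighbor 192.0.2.1 remote-as 64602       ! BGP session to the second upstream ISP
```

With a session to each upstream, the routers exchange tables and traffic shifts automatically when one link dies, which is the point of the exercise.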
Syed warned that ISPs should not charge for access to BGP through their service, and advised being wary of ones that do. Syed's own ISP offers the service, and has an internal BGP-based system that uses ISP-assigned ASNs internally for a less expensive solution.
Having never been to one before, I dropped by IBM's sponsored Media Lunch. With a small handful of other members of the media, I ate a couple of sandwiches that made airplane food taste like gourmet fare and sat back to listen to what IBM's presenters had to say.
The first presenter was Dr. Ajay Royyuru of keynote fame, who gave an abbreviated 10-minute version of his keynote, followed by representatives from the Bank of Canada, the University of Toronto, and iStockphoto.com, all giving similar presentations about their successes built on top of Linux-based IBM systems.
Reaping the benefits of open source
Following IBM's thinly disguised marketing event, I went upstairs for the day's second keynote speech, delivered by Hewlett-Packard's Linux Chief Technologist and former Debian Project Leader Bdale Garbee.
Garbee's keynote was interesting but poorly attended, with fewer than 100 people in the audience.
Garbee described HP's role as he saw it in the Linux and open source community as market stewardship. It's HP's job, he said, to help companies deploy Linux systems.
He noted that HP is the only major Linux-supporting company not to have written its own open source license. Instead, Garbee said, it's HP's policy to look through the GPL, BSD, Artistic, and other licenses to understand what they are and how they work, and for HP to work within that framework.
As to why companies should use Linux, one reason Garbee gave is that companies can download and try it and its associated software off the Net before committing to a large-scale rollout, while many commercial programs require purchasing before any rollout. Open source also allows companies to avoid vendor lock-in, he added.
Garbee addressed the issue of whether Linux and open source is less secure because malicious people can read code and find vulnerabilities that have not been fixed. His answer was that it works both ways.
More eyes are indeed on the code, but most of them are friendly, and more security vulnerabilities are found and fixed than might otherwise be the case.
For commercial adoption of Linux, Garbee said we're at the point where nearly all companies use Linux in at least some capacity, whether it be for a simple DHCP, DNS, or Web server or for the entire company's database systems. Even if the CIO of a company is not aware of Linux being present in the company, it is generally there if you ask the tech guys on the ground.
Open source middleware is at an early stage of adoption, he says, and open source is beginning to be at the leading edge of full specialized applications for the corporate environment.
In the server market, Linux and Windows are both gaining ground from Unix, but Linux is growing faster than Windows, Garbee says, though Linux has some of what he termed "pain points," such as the perennial complaint about support accountability. In a traditional IT department, there is a need to identify the party that will take ownership for a particular problem, and that party is usually a vendor who will resolve the problem. The question may be harder to answer for Linux and open source.
At HP, Garbee says, Linux is not a hobby; it's a business strategy that is paying off. He described Linux and HP's relationship as symbiotic, and listed statistics about HP's involvement in the community: HP employs around 6,500 people in the OSS-related service sector and 2,500 developers working on open source software, has some 200 products based on open source software, and has instigated at least 60 open source projects, as well as releasing numerous printer drivers as open source.
Garbee spent an extended period after his presentation taking questions from the floor. Among them was a question about HP's Linux support in its consumer desktop and laptop computers. Garbee described that side of the business as not high-margin, and indicated he was working to get better Linux support from within the company for those products. Business-oriented machines, though, including workstations and laptops, are generally Linux-compatible, he says, because HP uses better hardware in them.
The tradeshow floor
Following Garbee's keynote, I took a brief sweep of the tradeshow floor to see what was cooking, but the floor was not particularly busy. I took a few pictures of the proceedings and carried on, satisfied that I wasn't missing much.
Originally posted to Linux.com 2006-04-26; reposted here 2019-11-24.
Posted at 18:14 on April 26, 2006
Security and certification at LinuxWorld Toronto
The first day of this year's LinuxWorld Conference & Expo Toronto started off with its traditional pair of three-hour tutorials. From the provided list, I selected "The Open Source Security Tool Arena," presented by Tony Howlett, for the morning, and Dee-Ann LeBlanc's "Hit the Ground Running: Red Hat Certifications Preparatory" for the afternoon.
Tony Howlett, president of Spring, Texas-based Network Security Services and author of a book on the topic, gave the opening seminar to a crowd of around 30 people. He described his session as the Reader's Digest version of his book, and started out with a warning not to use the tools and methods he described on systems without getting written permission to do so. With verbal permission only, he warned, when something goes wrong, the person who gave that permission can deny it, and you can be reprimanded or lose your job for trying these security tools in a way that brings down a production system.
Howlett's presentation and book are based on Mandriva 10.1, though most of his presentation actually took place in Windows XP. He had two laptops set up with a KVM switch attached to the overhead projector.
Howlett noted as he started into his presentation that while open source security tools exist for Windows, they are not as common as their Linux counterparts, but their numbers are increasing. Within Linux, he says, just about every domain of security is covered by good open source tools.
Many security vendors, Howlett said, use open source tools as the base of their products. As well, many open source tools compare favorably to their proprietary and expensive counterparts; Howlett cited Snort as one example of this.
Howlett addressed the age-old question, "Is open source or proprietary software more secure?" His answer was neutral, suggesting that it is a difference in philosophy, not a difference in security. In the case of Windows, he noted, security was not the priority until recently; Microsoft's priority had been to get releases out the door, and it has only recently begun making security a serious priority. Now, he said, Windows is catching up to Linux on security.
What advantages does using open source security tools offer? Howlett says one key one is cost reduction.
Should you use a $15,000 piece of software when a free one will do the same thing? "Be the budget hero," he advised, and get yourself noticed and promoted in your company by recommending open source solutions. He suggested that saving a company large sums of money by implementing free versions instead of proprietary ones can be the difference between keeping and losing a job when layoffs come around, too.
He also advised users to use more than one tool. No single tool can do everything, and some have weaknesses that others don't.
He suggested that the best way to get into network security as a career is simply to dive in. Find the tools and get involved in their development through SourceForge.net and freshmeat, Howlett recommended. Read the code and learn how the tools work. Offer help on the appropriate mailing lists, do beta testing, and get your name out. Having good security knowledge is good for your résumé, he advised.
On the topic of mailing lists, he noted that they're much more efficient than the commercial approach of putting you on hold for two hours and transferring you to an overseas tech support person. In the time you would have spent on the phone, Howlett commented, your email reply will have come back and your problem will be resolved.
When using security tools, Howlett says, make sure you're building security onto a secure operating system. Building on top of an insecure operating system is like building on sand. The best way to build a secure system, he says, is to start from a fresh install and build security tools into it as you add your services. He recommended Bastille Linux, a suite of scripts designed to help lock down your system. Bastille, which requires Perl and some associated tools, is supported on Debian, Red Hat, Mandriva, Mac OS X, HP-UX, and Solaris, Howlett says. It works by interactively asking what a system will be used for, then applying security settings to prevent other uses of it.
Firewalls started as open source software, Howlett says, suggesting that they are the most basic level of network security. In Linux, iptables has been the standard firewall since the release of kernel 2.4, following the 2.2 kernel's ipchains and the 2.0 kernel's ipfw. Iptables can have rules set as needed or within a startup script, he says, and he added that a lot of commercial firewalls use iptables under the hood, citing WatchGuard's Firebox as an example.
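As a rough illustration (not from Howlett's talk), a startup script setting a default-deny iptables policy might look like this; the interface name eth0 and the allowed ports are assumptions for the example, and the commands need root:

```shell
#!/bin/sh
# Hypothetical minimal firewall: drop everything not explicitly allowed
iptables -P INPUT DROP                 # default policy: drop inbound packets
iptables -A INPUT -i lo -j ACCEPT      # always allow loopback traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT  # allow replies to our own connections
iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT   # allow inbound SSH
iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT   # allow inbound HTTP
```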
Turtle Firewall, he says, is iptables with a Web interface using Webmin. Turtle Firewall allows you to see all your firewall rules, he says, adding that this tool with iptables is as good as or better than a Linksys hardware firewall.
He also suggested Smoothwall Express as a useful open source firewall. Smoothwall, he explained, is a turnkey dedicated firewall system. A system running Smoothwall should not be expected or intended to run anything else. Smoothwall is available commercially or as a free download (as Smoothwall Express).
Smoothwall is a strippeddown Linux system with only the essentials for the firewall remaining.
Port scanners were the next security tool he discussed. They are important to both security professionals and those who would seek to harm systems, he noted. For the purpose, he says, Nmap, from Insecure.org, is the best tool "bar none." He described Nmap as a lightweight port scanner that runs on most Unixes, Linux, and Windows, either from the command line or within a graphical environment.
Howlett said that Nmap is useful for ping sweeps, where the program pings an entire range of IP addresses on a network to see what is alive; OS identification; and determining what services are active on a system. He recommended scanning your own systems; other people won't hesitate to do so, he said, so you might as well see what they're going to see.
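The uses Howlett listed map onto Nmap invocations roughly as follows. The addresses here are placeholders, and per Howlett's warning these should only ever be pointed at networks you have written permission to scan:

```shell
nmap -sP 192.168.1.0/24    # ping sweep: find live hosts on a /24 (spelled -sn in newer Nmap)
nmap -sV 192.168.1.10      # probe open ports and identify the service software behind them
nmap -O 192.168.1.10       # OS identification via TCP fingerprinting (usually needs root)
```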
In the case of OS identification, he described TCP fingerprinting as a way to determine the operating system of a networked system by the way it assembles its packets. He described the process of TCP fingerprinting as 80% accurate, and noted that if you query something for its operating system and find that Nmap does not know what it is, you can add that fingerprint to the existing database for others to be able to identify it in the future. He noted that determining the operating system can also be useful for a "bad guy."
He explained that port scanning is useful for finding running services on a computer that are not meant to be active. It can also be used to track down and identify running spyware and legacy services that waste resources and can lead to denial of service attacks. One example is chargen, a service that runs on port 19 and merely generates a stream of characters when queried.
From port scanners Howlett moved on to the next logical step, which is vulnerability scanners. Howlett described vulnerability scanners as port scanners that take it a step further, and cited Nessus as his favorite. When a vulnerability scanner finds an open port it checks to see what software is running on that port, and compares its results against a database of security vulnerabilities. If an option is set to do so, it will attempt to exploit anything it finds in an effort to determine whether the system has any vulnerabilities.
Howlett noted that the company behind Nessus is no longer developing the current version as open source. This has resulted in a number of forks, the most popular of which is OpenVAS. Nessus itself is still free of charge, but under a limited license. It's still good, but not open source, Howlett said.
After extensive discussion of Nessus, OpenVAS, and a related application developed by Howlett and others called NCC, Howlett went on to the topic of intrusion detection and intrusion prevention systems. In brief, intrusion detection systems are complex packet sniffers that also do other tasks, identifying any vulnerability exploited. Intrusion prevention systems, on the other hand, take proactive measures when potential attacks are discovered, but with the downside that false positives can cause problems for a network or system.
Howlett warned about the hazards of wireless networks, and how a lot of security compromises are made possible by careless use of wireless. Using a tool called Kismet, he showed that 37 unsecured wireless access points were visible from our conference session room, not all of which were the ones providing the conference's wireless connection.
Using a variety of tools, he demonstrated that information going over a wireless network, including hashed passwords, is readily obtainable and can be used nefariously.
Howlett covered so much material in the morning session that it would be impossible to describe it all in one article, and he continued in the afternoon with another three-hour session covering still more. All in all, this was a highly useful session for anyone interested in securing their computer systems and networks, as everyone should be.
Red Hat certification
In the afternoon, I attended Dee-Ann LeBlanc's session covering what one needs to do to pass the Red Hat Certified Technician and Engineer exams.
Addressing the question of whether a certification is actually important, she acknowledged that it depends on what you want it for. Years of experience administering Linux systems will be at least as valuable as a certification, but some companies may insist that their employees be certified whether or not it's relevant.
To prepare for an RHCT or RHCE exam, LeBlanc said you need to start by reading Red Hat's published exam goals, and get lots of practice administering Red Hat systems. On the test, there will be no Internet access, but all documentation included in a standard install, namely man pages and the information found in /usr/share/doc/, will be available.
Before considering taking an exam, you should know how to create, modify, and view files, and have a basic familiarity with console text editors and at least one console Web browser; in the event that your graphics system is down, you need an alternative that you are familiar with, she explained.
Knowing awk, sed, grep, and a text editor like vi is important, as is knowing how to redirect input and output on the command line. A basic understanding of TCP/IP networking, file compression and archiving tools (namely tar and gzip), email clients, and the switch user (su) command is also important. Finally, you need to be able to find your way around a console FTP client.
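A sketch of the kind of command-line fluency LeBlanc described, using an invented file:

```shell
# Create a small file with output redirection, then slice it several ways
printf 'alice:1000\nbob:1001\ncarol:1002\n' > /tmp/accounts.txt
grep 'bob' /tmp/accounts.txt                # find lines matching a pattern
awk -F: '{print $1}' /tmp/accounts.txt      # print the first colon-separated field
sed 's/carol/dave/' /tmp/accounts.txt       # substitute text on the way through
tar czf /tmp/accounts.tar.gz -C /tmp accounts.txt   # archive and compress with tar and gzip
```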
The RHCT and RHCE exams are divided into two parts, she says: troubleshooting and maintenance, and installation and configuration. Red Hat used to include a written section in the tests, but abolished it after finding no significant difference in the results with or without the section.
Under the first category for the RHCT test, which is the basic technician's exam, you must be able to perform such tasks as booting into a specific runlevel such as single-user mode, diagnosing network problems, configuring X and a desktop environment, creating filesystems, and finding your way around the command line.
For installation and configuration, she explained, you must know how to install a Red Hat system over a network, partition a hard drive, configure printing in the graphical and console environment, use cron and at to schedule execution of programs, and know how to look up commands you don't already know, for starters.
A candidate must also know how to set up and run the Lightweight Directory Access Protocol (LDAP) and Network Information Service (NIS), use the automounter for filesystems that don't need to be mounted at all times, manage user and group quotas, alter file permissions and ownership for collaborative projects, install RPM packages manually, update a kernel RPM properly (that is to say, without deleting the previous version), alter a boot loader configuration, set up software RAID during or after installation, and set or modify kernel parameters in /proc or using sysctl.
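The last item on that list can be explored safely on any Linux system; reading parameters needs no privileges, while writes (shown commented out) need root:

```shell
# Every tunable under /proc/sys has a matching dotted name used by sysctl
cat /proc/sys/kernel/ostype        # read a kernel parameter straight from /proc; prints "Linux"
# Writing a value, e.g. enabling packet forwarding, requires root:
#   sysctl -w net.ipv4.ip_forward=1
#   echo 1 > /proc/sys/net/ipv4/ip_forward   # equivalent via /proc
```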
The Red Hat Certified Engineer exam requires all those skills, she said, and then some. She suggested that you set up a disposable system to practice for an exam and to learn the skills needed to pass it, and repeatedly break and then fix it. For example, deliberately put a typo in a boot loader configuration, she suggested, and see the results. Explore and understand.
You cannot cram for an RHCT or RHCE test, she said; you must simply practice.
Originally posted to Linux.com 2006-04-25; reposted here 2019-11-24.
Posted at 18:08 on April 25, 2006
Ottawa Linux Symposium, Day 4
The final day of the Ottawa Linux Symposium was highlighted by this year's keynote address, delivered by Red Hat's lead Linux kernel developer, Dave Jones.
The day's regular events got going at noon with a session by Sébastien Decugis and Tony Reix of the European company Bull, entitled "NPTL stabilisation project".
Cleaning up loose threads
Their presentation discussed a 14-month-long project at their company to thoroughly test the relatively young Native POSIX Thread Library (NPTL). Over the course of their testing, they ran over 2,000 individual conformance tests, one at a time, each developed in a two-stage process.
The first stage of each test was consulting the POSIX standard's assertions, comparing them to the operational status of the library's routines, and coming up with a test case.
The second stage was to write test code for the case, run it, and evaluate the results. The resulting logfiles could be up to 300 kilobytes and initially took two days to read.
To help, they used a project called TSLogParser to cut that process down to just 15 minutes. The resulting table made it easy to see what worked and what failed in great detail, without wading through lengthy and detailed log files.
To date they have found 22 bugs in glibc, one in the kernel, and six in the POSIX standard itself, where specifications are unclear or contradictory.
No room at the Innternet
After the discussion of NPTL, which went far deeper than related here, I went on to a talk by Hideaki Yoshifuji of Keio University and the USAGI project entitled "Linux is now IPv6 ready".
In the middle of the 1980s, Yoshifuji said, recounting the history of IPv6, the IPv4 address space was already filling. The Internet Engineering Task Force, seeking to head off the exhaustion of IPv4's limited address space of slightly over four billion possible addresses, created the concept of IPng, or Internet Protocol: Next Generation, which became known as IPv6.
IPv6 came to life with the introduction of the specifications found in RFC 1883 in December of 1995.
IPv6 introduced a couple of significant changes over IPv4. For one, IPv6 addresses are 128 bits long, giving a theoretical address space of 340,282,366,920,938,463,463,374,607,431,768,211,456 (approximately 3.402*10^38, or 5.6*10^14 moles, for chemistry folks) possible IP addresses, which should run far less risk of running out than the current 32 bits, which provide fewer addresses than there are people on the planet.
The new Internet Protocol also implements the IPSec security feature, a simpler routing architecture, and mobility, allowing IP addresses not to be locked to particular routes.
IPv6 was first introduced into the Linux kernel in the 2.1 development tree in 1996. At the time, neither mobile IPv6 nor IPSec was included.
In 2000, the Universal Playground for IPv6 (USAGI) project was born with the intent of becoming the Linux IPv6 development project. The USAGI project's first implementation of IPsec in Linux's IPv6 stack was based on the FreeS/WAN project.
In September 2003, USAGI began testing Linux's IPv6 support against the ipv6ready.org certification program's basic tests. In February 2005, the advanced tests were completed, and Linux kernel 2.6.11-rc2 was certified for IPv6. More recent versions of the kernel have yet to be tested but are expected to pass.
Yoshifuji's slides from his presentation will be posted online shortly.
Jones on bug reporters
This year's keynote address was delivered by Red Hat kernel maintainer and kernel AGP driver maintainer Dave Jones, with an introduction by last year's keynote speaker Andrew Morton as per a longstanding OLS tradition.
Jones' keynote was entitled "The need for better bug reporting, testing, and tools".
Morton began his lighthearted introduction by commenting that Red Hat has a lot of world-class engineers... and that it also employs Dave Jones.
Jones, he said, joined the kernel development team in 1996 and currently maintains the AGP drivers in the kernel source tree. He's also the main Red Hat kernel guy, and with Red Hat's Linux market share being around or upwards of 50%, that makes Dave Jones an important player in the kernel community.
Morton commented that when the kernel 2.5 development tree was started, Jones volunteered to forward a number of bugfix patches from kernel 2.4, but when he was ready to merge them in, they no longer fit.
By the time he got them ready, the kernel 2.5 source tree had changed so much that the patches no longer lined up with the kernel source code.
Morton noted that a major crisis had recently been averted: through their leadership skills, Red Hat's management had pulled Dave through email's "darkest hour." Morton then read a post from the Linux Kernel Mailing List in which someone had threatened to get Jones fired from Red Hat.
Jones began his talk by saying that when he agreed to give the keynote at the last OLS (where he was asked in front of all the attendees), he did not really know what he was going to talk about, and, he said, he did not figure it out until he moved to Westford, Massachusetts, where he began working with Red Hat's Bugzilla bug-tracking system. He then put up on the projector a photo of a glacier with a distant person visible in it, headed "Westford, Massachusetts".
Jones then cited his favourite quote ever, he said, posting:
"I don't think you have a future in the computing industry." My CS teacher 15 years ago.
Jones noted that from kernels 2.6.0 through 2.6.7, new versions were released approximately once a month, but since 2.6.7, new kernel versions have come only about once every three months. The development cycle has slowed and needs to speed up again, he said.
From here Jones got into the meat of his address. Kernel upgrades, he noted, frequently break some drivers due to insufficient testing, citing the ALSA sound driver's breakage in kernel 2.6.11 as a major example. He noted that it can be difficult to test every condition of every driver with every release; the AGP driver alone, he said, supports 50 different chipsets, so a small change can cause problems that may not be found until the kernel is released.
Some patches, he confessed, are getting into the kernel with insufficient review as various sectional maintainers don't take the time to read over all of every patch.
The testing that is done at the moment is stress, fuzz, and performance testing; regression, error-path, and code-coverage testing are not done for every release.
Many bugs are not found in the release candidate stage of kernel development as most users don't test the prerelease kernels, instead waiting for ostensibly stable kernels and then filing bug reports for things that could have been caught earlier if more people used the test releases.
Jones went into a long discussion on bug reporting and management, concentrating on Bugzilla, which Red Hat uses for its bug tracking. He said that Bugzilla is not the be-all and end-all of bug tracking, but that it is the best we have.
One day, when he was tired of looking at kernel code, Jones grepped the entire GNOME source tree for common bugs and found about 50 of them. He wrote patches for them all and went to see about submitting them. GNOME wanted each bug to be put into bugzilla as a new bug, with a patch provided.
Jones did not want to spend that much time on it; in the end, someone else went through the exercise for him and got the patches in.
It is important to understand bug submitter psychology, noted Jones. Everyone who submits a bug believes their bug is the most serious one submitted. If one person has a bug that causes one serious problem, and someone else has a bug that causes a different serious problem, both are serious problems to their reporters, but for the affected software, the bugs have to be prioritised. User-viewable priorities in bug systems don't help, as everyone sets the highest priority on their bug and the purpose of the field is negated. Jones suggested that in most bug systems, the priority is simply ignored.
Some users lie in their bug reports, said Jones, editing their output to hide kernel module loading messages that warn of the kernel being tainted. Without the information, the bugs can be a lot more difficult to solve.
Many users refuse to file bugs upstream from their distributions, blaming the distribution rather than the kernel. Some users even change distributions to avoid a bug, only to find the bug appears in their new distribution as well when they upgrade its kernel.
Other reports he described as hit-and-run bug reports. The reporters do not answer questions that would help solve the bug, and the bugs eventually get closed for insufficient information. Once the bug is closed, the original reporter will often reopen it, irate that it was closed instead of solved, in spite of their lack of cooperation in gathering the information needed to solve it.
Some people submitting bugs include massive amounts of totally irrelevant information, sometimes including the store where they bought their computer or how much they paid for it. Some include thousands of lines of useless log output with only a short, unhelpful description of the problem they are reporting.
A particularly annoying breed of bug reporters is the type that will submit a bug report against an obsolete version of the kernel and refuse to upgrade to a version that has fixed the issue.
The last type of difficult bug reporter Jones described is what he called the "fiddler". These people start adjusting various things trying to get their system to work around the bug they are reporting, to no avail.
When a new version of the kernel is released with the bug fixed, it still does not work for them because of all the other changes they made trying to get it to work, though it may start randomly working again with a later upgrade.
Jones said he hopes that future versions of Bugzilla will be capable of talking to each other, allowing different Bugzilla deployments to exchange bugs upstream or downstream as appropriate.
Many bugs, he said, are submitted to distributors but are never passed upstream to the kernel team to actually be addressed, while other bug reports exist in several different places at once.
The last thing he had to say about Bugzilla and its implementation at Red Hat is a pattern observation he has made.
Use of binary-only kernel modules has dropped off significantly, from several related bug reports per week to only a few per month. However, use of binary-only helpers, driver wrappers that allow Windows drivers to run hardware under Linux, is up.
Jones commented that times have changed since he first joined the kernel at version 2.0.30. The kernel is much more complicated to learn than it used to be. It was once possible to get up to speed on kernel development fairly quickly, while it can now take a long time to learn the ropes.
He went on to discuss valgrind, gcc's -D_FORTIFY_SOURCE, and other approaches to finding and disposing of bugs in the kernel before moving on to a question and answer session with the packed room.
Among the questions asked was whether a distributed computing model could be used to help find and solve bugs, in the same way SETI@Home works. Jones did not think this would be a practical solution, noting that were bugs to be found, there was a good chance it would take down the host system and not actually get as far as reporting the bug back to the coordinating server.
If you ever meet Dave Jones, be sure to ask him about monkeys and spaceships.
Following Jones' keynote address, a series of announcements were made by an OLS organiser thanking the corporate sponsors for their continued support and thanking attendees for not getting arrested this year, among other things.
The final announcement was the selection of next year's keynote speaker: the energetic Greg Kroah-Hartman.
Why some run Linux
I have to take exception to a comment Andy Oram made about the first day of this year's OLS in an article on onlamp.com, where he wrote that "some attendees see Linux as something to run for its own intrinsic value, rather than as a platform for useful applications that can actually help people accomplish something" in response to some derogatory comments about OpenOffice.org's memory usage. The Ottawa Linux Symposium is a conference of kernel-space, not user-space, developers, many of whom do see Linux only for its intrinsic value. It is precisely because of this micro-focused engineering perspective that Linux is as good as it is. If you are looking for a conference where attendees seek practical uses of software for general users, outside of the development of the operating system itself, the Desktop Summit held here in Ottawa, or any of the many Linux conferences around the world, is likely to be a better option.
In the end, OLS is all about sharing knowledge. Senior kernel developers walk around, indistinct from those who've submitted one small patch. There are no groupies, no gawking at the community figures walking around... it's just a conference of a group of developers and interested parties, each one of them both knowing something that they can share, and intending to learn something they did not already know.
It is what a conference should be.
I'd like to congratulate the organisers on seven years of a well-organised, well-sized conference with a schedule appropriate to the people attending (no conference like this would ever dare start its sessions at 8:30 in the morning!) and I look forward to returning in future years.
Of the 96 sessions and formalised events scheduled for this year's Linux symposium, I took 42 pages of handwritten notes from attending 23 sessions, and of those, I covered 15 in these summaries. I hope you enjoyed the small sample of this conference I was able to offer.
Originally posted to Linux.com 2005-07-25; reposted here 2019-11-24.
Posted at 17:47 on July 25, 2005
Ottawa Linux Symposium, Day 3
The third of four days of this year's Ottawa Linux Symposium started before I did in the morning but the remainder of the day offered a great deal of interesting information on Linux virtualisation, women in the community, and an update on the state of Canadian copyright law.
Xen 3.0 and the Art of Virtualisation
Ian Pratt of the University of Cambridge described features of both the upcoming 3.0 release of the Xen virtualisation system and virtualisation more generally. Xen's current stable release is 2.4. I walked away with a better understanding of virtualisation than I previously had.
Virtualisation, Pratt explained, is a single operating system image creating the appearance of multiple operating systems on one system. In essence, it is chroot on steroids. Full virtualisation is the comprehensive emulation of an existing system.
Paravirtualisation is similar, but in this scenario, a guest operating system running on top of a real operating system is aware that it is not in actual control of the computer and is only a virtual machine. Xen and User-mode Linux both fall under this category of virtualisation.
The x86 architecture common to most desktop computers today is not designed for virtualisation, and Pratt described it as a bit of a pig to virtualise.
Pratt asked the question, "why virtualise?" and provided fairly straightforward answers to the question.
Many datacentres have hundreds or thousands of machines running single operating systems, often each running a single piece of software or service. With virtualisation, each one of those machines can host several operating systems, each running their own set of services, and thus massively reduce the amount of hardware needed for the operation.
Xen takes this one step further and allows clusters of virtual machine hosts with load balancing and failover systems.
Pratt explained that if a Xen virtual machine host in a Xen cluster detects imminent hardware failure, it can hand off its virtual machine guest operating systems to another node and die peacefully, without taking the services it was hosting with it. Meanwhile, people using the services may not even be aware that anything changed as they would continue more or less uninterrupted.
Using the same principal, the Xen virtual machine hosting clusters allow load balancing. If several virtual machines are running across a few hosts, the host cluster can transfer busier virtual machines to less busy hosts to avoid overloading any one node in that cluster. This allows an even higher number of virtual machines to run on the same amount of hardware and can serve to further reduce hardware costs for an organisation.
Within a virtual machine host server, each virtual machine should be contained, explained Pratt, to reduce any risk should a virtual machine become infected with malicious software or otherwise suffer some kind of problem to other virtual machines on the same server.
In order to run Xen, only the kernel needs replacing. No software above that has to be aware of its new role as a slave operating system within a larger system. Xen currently works with Linux versions 2.4 and 2.6(.12), OpenBSD, FreeBSD, Plan 9, and Solaris. Because guest kernels have to communicate with hardware like any other kernel, they must be patched to be aware of their parent operating system and talk to the hardware through Xen. A guest kernel attempting to make direct contact with the hardware will likely fail.
Modifications to the Linux 2.6 kernel to make it work with Xen were limited to changes in the arch/ kernel source subdirectory, claimed Pratt. Linux, he said, is very portable.
Virtualised kernels have to understand two sets of times, while normal kernels only have to be aware of one, noted Pratt.
A normal kernel that is not in a virtual machine has full access to all the hardware at all times. Its sense of time is real: a second going by in kernel time is a second going by on the clock on the wall. However, when a kernel is virtualised, a second going by for the kernel can be several seconds of real time, as it is sharing the hardware with all the other kernels on the same computer. Therefore a virtualised kernel must be aware of both real wall-clock time and virtual processor time, the time during which it actually has access to the hardware.
Among the features coming in Xen 3.0 is support for X86_64 and for SMP systems. Coming soon to a Xen near you is the ability for guest kernels to use virtual CPUs up to a maximum of 32 per system (even if there are not that many real CPUs!) and add and remove them while running, taking hot swapping to a whole new virtual level.
While I do not fully understand memory rings (perhaps someone who does can elaborate in the comments), Pratt explained how Xen runs under 32-bit x86 versus 64-bit x86 in the context of memory rings. In X86_32, Xen runs in ring 0, the guest kernel runs in ring 1, and the userspace provided to the virtual machine runs in ring 3. In X86_64, Xen runs in ring 0 and the virtual machine's userspace runs in ring 3, but this time the guest kernel also runs in ring 3, because of the massive memory address space provided by the extra 32 bits. With 8 terabytes of memory address space available, Xen can assign large blocks of memory at widely separated addresses, where it would be more constrained under the 32-bit model.
The goal of the SMP support system in Xen is to make it both decent and secure. SMP scheduling, however, is difficult. Gang scheduling, where multiple jobs are sent to multiple CPUs at the same time, said Pratt, can cause CPU cycles to be wasted, and so processes have to be dynamically managed to maintain efficiency.
For memory management, Pratt said, Xen operates differently from other virtualisation systems. It assigns pagetables for kernel and userspace in virtual machines to use, but does not control them once assigned.
For discussion between kernelspace and userspace memory, however, requests do have to be made through the Xen server. Virtual machines are restricted to memory they own and cannot leave that memory space, except under special, controlled shared memory circumstances between virtual machines.
The Xen team is working toward the goal of having unmodified, original kernels run under Xen, allowing legacy Linux kernels, Windows, and other operating systems to run on top of Xen without knowing that they are inside a virtual machine. Before that can happen though, Xen needs to be able to intercept all system calls from the guest kernels that can cause failures and handle them as if Xen is not there.
Pratt returned to the topic of load balancing and explained the process of transferring a virtual machine from one host in a Xen cluster to another.
Assuming two nodes of a cluster are on a good network together, a 1GB memory image would take eight seconds in ideal circumstances to transfer to another host before it could be resumed. This is a lengthy downtime that can be noticed by mission-critical services and users, so a better system had to be created to transfer a running virtual machine from one node to another.
The solution they came up with was to take ten percent of the resources used by the virtual machine being moved and use them to perform the transfer, so as not to significantly impact its performance in the meantime. The entire block of memory in which the virtual machine is operating is then transferred to its new home repeatedly.
Each time, only the parts of memory which have changed since the last copy are transferred, and because not everything changes, each cycle goes a little faster and fewer changes accumulate. Eventually, there are so few differences between the old and new hosts' memory that the virtual machine is killed off, the last changes in memory are copied over, and the virtual machine is restarted at its new location. In the case of a busy webserver he showed statistics for, total downtime was on the order of 165 milliseconds, after approximately a minute and a half of copying memory over in preparation.
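The iterative copying Pratt described can be sketched as a loop: send every page once, then keep re-sending only the pages dirtied during the previous pass, and pause the guest only for the final, small delta. This is a simplified simulation with invented page counts and dirtying rates, not Xen's actual implementation.

```python
def precopy_migrate(memory, dirty_pages_per_round, threshold=8):
    """Simulate pre-copy live migration.

    memory: dict of page number -> contents on the source host.
    dirty_pages_per_round: for each copy round, the set of pages the
    guest dirties while that round's copy is in flight.
    Returns (destination_memory, copy_rounds, stop_and_copy_pages).
    """
    dest = {}
    to_send = set(memory)            # first pass: every page
    rounds = 0
    for dirtied in dirty_pages_per_round:
        if len(to_send) <= threshold:
            break                    # delta small enough: pause the guest
        for page in to_send:
            dest[page] = memory[page]
        rounds += 1
        to_send = set(dirtied)       # only re-send what changed
    # stop-and-copy: guest paused, final delta transferred
    for page in to_send:
        dest[page] = memory[page]
    return dest, rounds, len(to_send)

mem = {p: "data-%d" % p for p in range(1024)}
# dirtying shrinks each round as the working set converges
dirty = [set(range(256)), set(range(64)), set(range(16)), set(range(4))]
dest, rounds, final = precopy_migrate(mem, dirty)
```

The downtime corresponds only to the final stop-and-copy of a handful of pages, which is why the measured pause can be milliseconds even though total copying takes a minute or more.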
A virtual machine running a Quake 3 server while grad students played the game managed the transition with downtime ranging from 40 to 50 milliseconds, causing the grad students to not even be aware that any changes were taking place.
Pratt said that the roadmap for Xen 3.1 sees improved performance, enhanced control tools, improved tuning and optimisation, and less manual configuration to make it work.
He commented that Xen has a vibrant developer community and strong vendor support which is assisting in the development of the project.
The Xen project can be found at xen.sf.net or xensource.com, and is hiring in Cambridge, UK, Palo Alto, California, and New York, Pratt said.
Intel architect Gordon McFadden gave another virtualisation-related talk in the afternoon, entitled "Case study: Usage of Virtualised GNU/Linux to Support Binary Testing Across Multiple Distributions".
The basic problem that faced McFadden was that he was charged with running multiple Linux Standard Base tests on multiple distributions on multiple platforms, repeatedly, and could not acquire additional hardware to perform the task.
He described the LSB tests as time consuming, taking up to eight hours each, but not hard on the CPU.
The logical solution was to run the tests concurrently using virtual machines. As a test got under way in one virtual machine, instead of waiting hours for it to finish, another test could be launched in another virtual machine on the same physical machine.
McFadden's virtual machine of choice for the project was User-Mode Linux (UML).
The setup McFadden and his team used was the Gentoo Linux distribution riding on top of kernel 2.6.11 and an XFS filesystem. His reasoning for using Gentoo was not philosophical, but simply that he had not used it before and wanted to try something new. The filesystems of the virtual machines were ext2 or ext3, but appeared to the host system as flat files on the XFS filesystem.
The tests were run on a 4GHz hyperthreaded system with 1GB of RAM, and tested Novell Linux Desktop 10, Red Hat Enterprise Linux 3 and 4, and Red Flag Linux. Each test case ran on an 8GB virtual filesystem and was assigned either 384 or 512MB of RAM.
To set up the systems, they were installed normally and dd'ed into flat files to be mounted and used by the UML kernel.
The guest kernels were instantiated and loaded, and each popped up an xterm for management. A test could then be run by logging in through the xterm, starting NFS on the guest system, and launching the test.
The result of the whole process was a quickly reusable hardware platform that was economical both fiscally and in lab and desk space, though McFadden did not relate the results of the LSB tests themselves.
Using virtual machines for testing has limitations as well, McFadden noted. For one, it cannot be used to test hardware, and resource sharing can sometimes become a problem. For example, if two kernels are vying for control of one network interface, performance will be below par for both.
McFadden said he had alternatives to using virtualisation to run his tests, but using boot loaders to continually load different operating systems would have taken a lot longer, with long delays while multiple tasks could not be performed at the same time. His other alternative, VMware, was to be avoided as he was already familiar with it and wanted to learn something new.
Following a brief thirty-minute interlude that passed for a dinner hour, BOF sessions began for the evening.
Among those that I attended was one entitled "Debian Women: Encouraging Women Without Segregation" hosted by Felipe Augusto van de Wiel (not a woman, incidentally).
The Debian Women project started around DebConf 4, following a Debian Project Leader (DPL) election debate question about how the DPL hopefuls would handle attracting more women to the Debian project.
The question enticed a lengthy mailing list debate, as nearly anything in Debian can, at the end of which a new group was born called Debian Women, with its own website by the same name.
Some research into open source projects found that the highest percentage of women in a major project appeared to be about 1.6%. At the start of the Debian Women project there were just 3 female Debian developers, but in the year since, 10 more women have entered the New Maintainer Queue (NMQ, in Debian lingo).
Van de Wiel made the point repeatedly through the session that the Debian Women project includes men and is not an exclusive club. Its list and IRC channel provide a good place for people seeking help to get it, regardless of gender.
The Debian Women project's goal is to encourage and educate the Debian community on the topic of equality and to encourage women to volunteer in the free software community.
The project seeks to show off the accomplishments of its members through its profiles page, and offers information on how to get started through its involvement page.
Van de Wiel explained that he was running the session rather than one of the Debian women because many of them were at DebConf 5 in Helsinki, Finland, and could not attend OLS this year.
The discussion touched on a recent flap at Debian over a package called hotbabe, which featured an animated woman taking off a percentage of her clothes based on the activity of the system's CPU until, at 100%, she was completely naked. Some complained that there was no option to have the virtual stripper be male, and after a lengthy flamewar on the Debian mailing lists, the package was eventually dropped as not providing anything new that Debian needed.
The point of this discussion, though, was the lack of awareness among males in the community of the sensitivities of the women around us. Such actions do not serve to encourage female participation in the development process.
An issue in a similar vein is that a good deal of documentation in Debian refers to hypothetical developers as male, rather than in a gender-neutral way, further adding to the implicit bias found in the development community.
Van de Wiel went on to discuss some of the things women in Debian are now doing, including translations of the project's own website into 8 languages and assistance provided to Debian Weekly News.
Outside of Malaysia, where it was pointed out that around 70% of IT workers are female, there is a general cultural bias in favour of males in the field. One attendee noted that a recent study in the US found that American families typically spend four times more on IT-related investment for their male children than for their female children.
Another point made is that guys tend to enjoy studying Linux in their free time, perhaps instead of their homework, while women tend to follow their curriculum more precisely and thus are more likely to be familiar with a dominant platform.
Ultimately, more can be done to encourage more female developers to join the community, as they are certainly out there.
The final session I attended on Friday was a BOF session led by Russell McOrmond on the topic of Canadian copyright law, entitled simply "GOSLING/Canadian copyright update".
GOSLING stands for "Get Open Source Logic Into Governments".
To start, McOrmond suggested Canadians in the room who have not yet done so sign a petition on the topic of copyright law in Canada asking the Canadian government not to damage copyrights with a law they are proposing. He suggested that if MPs receive signatures on a petition on an issue like this, they may realise that there are actually Canadians who care about these issues other than the business people who stand to profit from them.
Bill C-60, currently before the House, would make the author of software legally liable for copyright violations carried out with the help of the software they have written. It would give copyright ownership to whoever takes a picture, regardless of the circumstances: hand your camera to a friendly passerby to photograph you and your friends, and the passerby owns the copyright. Photos taken under contract would remain under the copyright of the photographer who took them. Bill C-60, the act to amend the Copyright Act, is 30 pages, translated, and amends the 80-page Canadian Copyright Act currently in effect.
McOrmond noted that IBM has a lawyer in Canada named Peter K. Wang actively lobbying the Canadian government for software patents in this country. He suggested that an internal debate needs to take place at IBM about whether or not it actually supports software patents, especially as some IBM employees at the conference had earlier expressed their displeasure with the concept.
McOrmond referred to several URLs people interested in the copyright issue in Canada should follow: flora.ca/A246, goslingcommunity.org, www.cippic.ca, www.creativecommons.ca, www.forumonpublicdomain.ca, www.efc.ca, www.digitalsecurity.ca, and www.softwareinnovation.ca. Some American sites that deal with similar issues are: www.eff.org, www.ffii.org, www.centerpd.org, and www.pubpat.org.
A point McOrmond made a number of times is that Canadian copyright law is being influenced by a large subset of businesspeople in the copyright-concerned community who would prefer that the Internet not exist. But with the Internet clearly here to stay, we should be working on ways to deal with copyright that benefit as many Canadians as possible, not just a few.
The province of Quebec has long been a stronger defender of its culture than most of the rest of Canada, and McOrmond suggested it would help the case for killing Bill C-60 if the province of Quebec and its dominant party in the Canadian parliament, the Bloc Québécois, realised that the choice they are facing is between the copyright system we know and the one we see in the United States. Quebec is usually the first to act on this kind of thing, and it may need to before the rest of the country catches on.
A caution McOrmond had for the library community in Canada is that asking for copyright exemptions for certain circumstances hurts everyone more than it helps the libraries. As one example, allowing libraries to exchange copyrighted information electronically as long as the information self-destructs after a set amount of time would require running a platform that could enforce that self-destruction, and would likely lock the library system into a version of Windows capable of the task.
Originally posted to Linux.com 2005-07-23; reposted here 2019-11-24.
Posted at 17:41 on July 23, 2005
Ottawa Linux Symposium, Day 2
The second sitting of the 7th session of the Ottawa Linux Symposium saw several interesting, highly technical discussions. Here are my reports on Trusted Computing, the ext3 filesystem, the e1000 network driver, and SELinux.
The morning session
I attended a session in the morning called Trusted Computing and Linux. It was led by Emily Ratcliff and Tom Lendacky of the IBM Linux Technology Centre. The two presenters switched off frequently throughout their presentation.
Ratcliff described trusted computing as an industry standard for hardware that addresses a number of problems. Take peer-to-peer networking, for one. In theory, trusted computing could protect peer-to-peer file sharing networks from file poisoning attacks, where bogus files are shared in an effort to corrupt people's downloads and make them unusable.
The concept could also be used to have a user ask a remote computer to check a local terminal to see if it is clean before logging in.
Lendacky introduced the TPM (Trusted Platform Module) as a physical device that provides protection capabilities. The TPM is generally a hardware chip that can be described as a cryptographic coprocessor.
It protects encryption and signature keys. It uses nonvolatile memory to store endorsement, storage root, and data integrity keys and volatile memory to store platform configuration registers.
Ratcliff explained that the TPM works by performing a series of measurements. The BIOS measures the boot loader. The boot loader measures the operating system kernel. The operating system kernel manages the applications through a software stack. At each level, the next step is checked for integrity against the TPM.
The boot loader's responsibility is to measure the kernel and configuration files prior to handing over control to the kernel. This functionality is now available for both popular Linux boot loaders, grub and lilo.
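The "measurement" at each stage works through the TPM's Platform Configuration Registers: a PCR is never written directly, but extended, replaced by a hash of its old value concatenated with the new measurement, so the final value depends on every stage of the boot chain, in order. A minimal sketch of the TPM 1.2-style SHA-1 extend operation follows; the stage names are invented for illustration.

```python
import hashlib

def pcr_extend(pcr, measurement):
    """TPM 1.2-style extend: new PCR = SHA1(old PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

pcr = b"\x00" * 20                       # PCRs start zeroed at reset
for stage in [b"bios", b"bootloader", b"kernel"]:
    # each component is hashed, and the hash extends the register
    pcr = pcr_extend(pcr, hashlib.sha1(stage).digest())

# Re-running the same chain in the same order reproduces the value;
# changing or reordering any stage yields a completely different one.
```

This is why a single modified boot component is detectable: there is no way to "set" the PCR back to a known-good value without replaying the exact sequence of good measurements.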
Lendacky said that kernel 2.6.12 incorporates TPM chip support. There's only meant to be one way to use TPM, he said, and that is through the software stack.
Ratcliff introduced TrouSerS, the code name for the TPM software stack. It includes an access control list that allows an administrator to allow or deny remote users access to a system's API.
There were a number of questions following the presentation. The first was "How can TPM keys be validated?"
The answer, according to the presenters, is by the user entering a password. That prompted another question, asking if passwords were the most secure option available, since they tend not to be very secure.
Ratcliff referred to the authentication system known as attestation. TPM chips are meant to be credentialed by their manufacturers. The system gets a platform credential, and together an attestation ID is achieved. In theory, anyway: manufacturers are not keeping the public side of TPM keys, and as a result the system is not working as intended.
The TPM keys have to be manufacturer verifiable to work. Trusted systems where this is particularly important are things like bank machines and automatic voting machines.
The ext3 filesystem
My first session after lunch was entitled: "State of the art: Where we are with the ext3 filesystem", presented by Mingming Cao and Stephen Tweedie.
Cao discussed the Linux extended 3 (ext3) journaling filesystem. Although it is a young filesystem, she said, it is increasingly widely used. The people working on it are trying to make ext3 faster and more scalable.
Cao listed some features ext3 has acquired in the 2.6 kernel. Among them are online resizing, which changes the size of the partition without taking the drive down, and extended attributes.
As a means to fight the problem of filesystem fragmentation, Cao explained a system of block preallocation, where files can be allocated an amount of space on disk appropriate to their eventual needs and can thus hopefully remain contiguous on disk.
Cao spent a good deal of time explaining extents and related work. Extents allow delayed block allocation until more information is learned, allowing more contiguous file allocation. This is especially useful for temporary files.
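An extent describes a whole run of blocks with a single (logical start, physical start, length) triple, rather than one pointer per block as in ext3's indirect maps, which is what makes contiguous allocation pay off. A toy lookup, with invented block numbers:

```python
def extent_lookup(extents, logical_block):
    """Map a logical block number to a physical block number.

    extents: list of (logical_start, physical_start, length) triples,
    sorted by logical_start.
    """
    for lstart, pstart, length in extents:
        if lstart <= logical_block < lstart + length:
            return pstart + (logical_block - lstart)
    return None  # hole: no block allocated

# A 300-block file stored in just two contiguous runs on disk
extents = [(0, 5000, 200), (200, 9100, 100)]
extent_lookup(extents, 0)     # → 5000
extent_lookup(extents, 250)   # → 9150
```

Two triples here replace 300 individual block pointers; the fewer and longer the runs, the bigger the saving, which is why delaying allocation until a file's size is known helps.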
Cao said the ext3 team wants to improve the ext3 filesystem, but that this could result in some filesystem format changes. Because of the nature of file systems and filesystem changes, adoption of any revisions would be likely to be very slow.
Among the work in progress is a reduction in file unlink/truncate latency. Truncating large indirectmapped files is slow and synchronous, explained Cao.
Until recently, timestamps under ext3 were updated only about once per second; ext3 had no way to store high-resolution timestamps. The kernel is capable of storing nanosecond timestamps on an extended inode, but only second-granularity timestamps on normal inodes.
Solutions proposed for ext3 include parallelised directory operations and the serialising of concurrent file operations in a single directory.
The future holds more improvements for ext3 in the mainline kernel distribution, with a 64 terabyte maximum partition size coming.
Cao expected to get a copy of her presentation up on the ext2 project website soon. Questions on the presentation were answered by Stephen Tweedie, who explained that ext2 and ext3 are, for all intents and purposes, exactly the same filesystem. If an ext3 filesystem were to be mounted under Linux kernel 1.2 as an ext2 filesystem, provided it didn't exceed normal ext2 parameters of the time for file and partition sizes, it would be able to mount fine, albeit without use of the filesystem's journaling features.
A member of the audience asked why we don't just go directly to a 1024-bit filesystem, citing the progression of 12- to 16- to 32- to 64-bit filesystems he'd seen in his career. Tweedie replied that any filesystem that large would be simply unmanageable, taking weeks to fsck.
A case for the e1000
Intel's John A. Ronciak presented a talk called "Networking driver performance and measurement, e1000: a case study".
Ronciak's goal in his case study was to improve the performance of the e1000 gigabit ethernet chip under kernel 2.6.
He found through his studies that kernel 2.4 outperformed kernel 2.6 with the chip in every test and under every configuration in terms of throughput, and he thus concluded that kernel 2.6 still has room for improvement.
In its day, a 10/100 network interface card sitting on a 32-bit, 33MHz PCI bus could bring a system to its knees with input/output overload. Today, the same can be done with a 10 gigabit ethernet device on a modern motherboard, noted Ronciak.
Ronciak noted that Linux lacks a decent common utility for generating performance data across platforms and operating systems, to, as he put it, compare apples with apples when measuring performance.
Lacking such free tools, he showed us data collected using a program called Chariot by IXIA as a test tool.
His results showed that kernel 2.4 always outperformed kernel 2.6 in data throughput performance, and the performance within the 2.4 and 2.6 kernels varied widely between different revisions and by whether or not NAPI or UP configuration options were used in the kernel.
NAPI, he said, is an interface commonly used to improve network performance. Under kernel 2.4, NAPI caused CPU usage to go down for the same amount of throughput, while under kernel 2.6 his results found that CPU usage with NAPI actually went up compared to a NAPI-less but otherwise identical kernel at the same throughput.
In measuring performance, he cautioned, the size of the frame is an important factor. With large frames, the data going through with packets is significant enough to measure, though with small packets, it is more useful to measure packet counts than actual data shoved through a connection. His slides showed a chart to emphasise this point.
Ronciak found that in his initial testing, NAPI actually caused an increase in lost packets, though with a change in NAPI weight values, packet loss could be reduced. The problem, he explained, was that the input buffer was not being cleared as fast as the data was coming in, resulting in lost packets. A driver change was required to fix the problem. He suggested that a modifiable weight system for NAPI would be useful in some circumstances, but noted that the issue is up for debate.
Among the problems Ronciak found with NAPI is a tendency to poll the interface to see if there is any new data waiting for the kernel faster than the interface can handle incoming requests, resulting in wasted system resources and less efficient operation. His suggested fix for this problem is to allow a minimum poll delay based on the speed of the network interface.
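Ronciak's observations about buffer overruns and NAPI weight can be illustrated with a toy model. This is not real driver code; the `simulate` helper, the ring size, and the arrival rates are all invented for demonstration, but the dynamic matches what he described: if each poll's budget (weight) is too small for the arrival rate, the ring buffer overflows and packets are dropped.

```python
# Toy model of NAPI-style budgeted polling (illustrative only, not real
# kernel driver code): packets arrive into a fixed-size ring buffer, and
# each poll call may process at most `weight` packets. If the weight is
# too low for the arrival rate, the ring overflows and packets are lost,
# mirroring the packet loss Ronciak described.

def simulate(arrival_per_tick, weight, ring_size=64, ticks=1000):
    backlog = 0
    dropped = 0
    processed = 0
    for _ in range(ticks):
        # New packets arrive; anything beyond the ring capacity is dropped.
        space = ring_size - backlog
        accepted = min(arrival_per_tick, space)
        dropped += arrival_per_tick - accepted
        backlog += accepted
        # One poll per tick, bounded by the NAPI weight (budget).
        done = min(weight, backlog)
        backlog -= done
        processed += done
    return processed, dropped

if __name__ == "__main__":
    for w in (4, 8, 16):
        processed, dropped = simulate(arrival_per_tick=8, weight=w)
        print(f"weight={w:2d}: processed={processed}, dropped={dropped}")
```

With a weight of 4 against 8 arrivals per tick, the backlog grows until the ring fills and packets start dropping; raising the weight to match the arrival rate eliminates the loss, which is the effect Ronciak achieved by changing the NAPI weight values.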
Ronciak noted that one thing he learned with this project that he could pass on is never to be afraid to ask the community for help. A call to the community got his testing and code a number of bug fixes, patches, and improvements as simple as whitespace cleanup.
He concluded that he intends to keep working to improve network performance under Linux, but that he is looking for help in this effort. Ronciak is also looking to further improve NAPI.
He reiterated at the end of his presentation the need for a free standard measurement for network performance across platforms. He is also seeking help with finding new hardware features which could help solve some bottlenecks in network performance.
Later, back at the BOF
In the evening at a BOF session, the US National Security Agency (NSA)'s Stephen Smalley gave an update on the status of the NSA's SELinux project.
Smalley said that the last year has been a major year for SELinux. A year ago, SELinux was included in the Fedora Core 2 release, but was not enabled by default. Since then, he said, it has been included in both Fedora Core 3 and Fedora Core 4, and has shipped enabled rather than disabled.
SELinux can now scale to large multiprocessor systems, said Smalley, and IBM is looking to evaluate SELinux for certifications allowing the US government to use it formally.
SELinux is exploring a multi-category security system allowing users to be more involved in the security policies of the system, Smalley explained.
A more in-depth look at SELinux can be had this winter at the SELinux Symposium, March 2006 in Baltimore, Maryland.
Originally posted to Linux.com 2005-07-22; reposted here 2019-11-24.
Posted at 17:35 on July 22, 2005.
LWCE Toronto: Day 3
The third and final day of Toronto's LinuxWorld 2005 had the meat I was looking for. First, I attended Mark S. A. Smith's presentation entitled "Linux in the Boardroom: An executive briefing". Next, I listened to David Senf of IDC discuss the top 10 CIO concerns with open source. And finally, I wrapped up my attendance at this year's LinuxWorld Toronto with another session by the energetic Marcel Gagné, in a presentation entitled "Linux Culture Shock".
How do we combat executive blinkers?
Smith started his discussion by making the point that he had never crashed his kernel in Linux, and while some applications had crashed, the system did not go down with them. There are fewer stability issues with Linux than there are with Windows, he said.
He described Linux as a disruptive technology; it changes everything. In two years, he said, Linux has gone from a "pretty neat" operating system to becoming a very useful and important player.
Smith cautioned about some of the risks of using proprietary software, citing an example of a city's municipal bylaws. If the city stores those bylaws in a proprietary format and, years later, it wants to open them again, the reader for that file type may no longer be licensed at the city and the city may have no legal way of opening their own bylaws.
He noted that Linux is not a great option for experienced Windows power users, while it is an increasingly good option for the casual user with some Windows experience. This is due to the advent of Linux systems that include a lot of Windows-like behaviour.
Where Linux is most useful, he said, is on the updating and upgrading of aging systems like NT5, NetWare, MS Exchange, and so forth. From that perspective, Novell was wise to buy SuSE, as its NetWare products were getting old and a Linux upgrade made sense for the entire company. Linux also makes sense for new application installations, such as a new database deployment, and where security is a concern.
On the topic of transition costs, he noted that Oracle, among other large software vendors, licenses their software by the number of processors it is running on, not the operating system. Thus a license for 4 processors in a traditional Unix being transferred to Linux would not cost any additional licensing fees, just a trade of software files.
He commented that a wholesale replacement with Linux of Windows systems not otherwise ready for upgrade was comparable to trading an entire fleet of trucks in for a fresh fleet of trucks: the economic sense is not there.
Of all the major industries that use computers today, he said, there are none that have not already started using Linux.
What executives are looking for in their systems is industrial strength with modest customisation, a longer life cycle, and stability. They want to avoid, argued Smith, non-delivery of software or specifications, systems that are difficult to support, high training requirements, paying for incremental increases in users, and any kind of negative impact on customers.
Linux upgrades make sense, he said, for companies that are in the upswing of their business cycles. A company in a downturn would do well to stay with what they are using until they're on more stable financial footing.
Linux delivers low cost, stability, security, improved performance on the same hardware, flexibility, business agility, a natural upgrade path, ease of use, and consistency. It reduces costs, he said, but that is more a bonus than a rationale for switching to Linux. He warned that focusing on the cost savings of switching to Linux could negate them, as people try to inflate the savings artificially.
Linux' strengths include rebootless upgrades and high uptimes; there is very little need to reboot. He noted that many a Linux server has simply gone unrebooted for more than a year.
Older systems that would struggle to run Windows support Linux with little trouble, reducing hardware costs. Linux requires less RAM than its proprietary counterparts to accomplish the same tasks, and fewer administrators are needed to manage large numbers of Linux systems.
In order for executives to be sold on a switch to Linux, they have to accept the factors he discussed and see the return on investment possibilities, largely from savings from more efficient hardware, software, and use of employee time.
The Gravity of Open Source
David Senf of IDC led a session entitled "Gravity of Open Source: Top 10 CIO concerns" in which he listed, in reverse order, the top 10 concerns of chief information officers related to the adoption of Linux and open source, based on studies conducted by IDC. The studies seek to find the questions CIOs have, but it is left as an exercise to vendors and others with an interest in deploying open source and Linux to actually provide the answers.
10. Intellectual property concerns extending out of the SCO vs. The World lawsuits.
According to their survey, only 3% of survey respondents indicated IP issues had stopped them from implementing open source solutions, though 54% said it wasn't applicable because they weren't looking at converting to Linux or Open Source anyway.
9. What open source business models will succeed?
CIOs have expressed concern about choosing software packages or applications that will no longer be developed or supported at some point in the future. Which software will last? Sticking to larger, better known companies such as IBM, Novell, Sun, HP, and Red Hat, is, for many, the answer. These companies are driving the agenda for open source development and are useful to follow.
Senf discussed the SuSE and Red Hat Linux distributions and the effect they are having on each other. He predicts that over time, the two will balance each other out and have roughly even market share.
8. Where is open source going?
Is Linux the end point, or the means to something further? Organisations are more likely to deploy open source given Linux' success.
7. What workloads is Linux good for?
Over the last year, Senf said, Linux has grown by 35% on servers. Its main use is in information technology departments and functions, and as a basis for web infrastructure. Linux is becoming a powerhouse in high performance computing.
6. Can we have too much open source software at once?
How much open source software should we have?
There's an increase in open source use of variants of SQL database systems, and Apache is the dominant web server, among other examples. What software and how much software is needed, however, is a matter of discretion and varies wildly from one company to the next.
Who is using the software is also an important factor. Senf noted that knowledge workers tend to need open source software less because of the specific tasks they are doing, while transactional workers are more flexible and have more use for the open source systems.
Open source software can also be implemented in stages. For an example, he pointed to OpenOffice.org and Firefox, which are both run primarily on Windows systems for the time being.
5. What channels are being used to acquire Linux and open source software?
Open source software other than the Linux operating system itself is generally purchased through solutions vendors, while Linux is generally acquired through normal public distribution channels such as downloads or inexpensive CDs.
4. Why are organisations and companies adopting Linux and open source software?
For Linux in the business environment:
The number one reason given is for the cost savings. Second is that it gives companies an alternative and leverage against software providers. If they are underperforming, they have an option for switching.
Third is improved return on investment.
For open source software, from a business perspective, the first two reasons are the same, but the third is protection from vendor lock-in. Senf indicated that he disputes this, noting that it is no less complicated or expensive to convert away from open source packages than it is to convert from closed source to open source packages.
His own rationale is that Linux deployment offers improvements in speed, performance, and security. Open source software offers improvements by offering functionality not available in closed source software.
Secondly, it is user-modifiable. Senf noted that while many companies offer this as a reason for switching to open source software, he doubts many actually make the modifications they are given the right to make. Finally, security is offered as the third reason for deploying open source software in businesses.
3. What is up with the total cost of ownership debate?
CIOs, on average, estimate, according to a chart Senf provided in his presentation, that training, integration, internal and external support, and installation and deployment all cost more under Linux, but that administration, downtime, and acquisition costs are comparable or cheaper with Linux than under proprietary operating systems. How this comes out at the end will vary for each company and implementation.
2. Why are organisations and companies not adopting Linux and open source software?
1. How well does it fit in with our business requirements?
CIO concerns in this area include productivity questions, existing markets, and the maintenance of customer service, as areas that cannot be compromised for the adoption of Linux and open source.
His list finished, Senf noted that with the advent of big-vendor support for Linux, the risks of open source software are, in his words, taken out of the game.
Linux Culture Shock
I wrapped up my attendance at this year's LinuxWorld Toronto with another session by the energetic Marcel Gagné, in a presentation entitled "Linux Culture Shock". Gagné started his discussion with a comment about avoiding playing with the Linux kernel. He said it is no longer necessary to tweak or compile a Linux kernel; the stock kernels are more than adequate.
He touched on the fear of spending money for retraining and the myth of Linux desktops not being useful.
In an effort to show that this is indeed a myth, he began a quick tour of the Linux desktop under KDE, right-clicking on his desktop to show the configuration menu and other features familiar to users of Windows.
Mozilla Firefox, he said, is not perfect software, but it is more perfect than the proprietary Internet Explorer. He reminded the audience that the Department of Homeland Security recommended against running IE because it is so insecure. OpenOffice.org 2.0 (still in beta), more than its predecessor versions, is very close to Microsoft Office. And Gaim, he said, is a good instant messaging option for users transitioning from Windows.
Regarding Gaim, an audience member asked about deliberate protocol breakage on the part of instant messaging service providers. "The open source community is nothing if not resourceful" was his answer.
Some time ago, he related, Microsoft broke the MSN protocol to lock out unauthorised clients. It took a couple of days, but open source developers were able to reverse engineer the protocol and restore the open source messaging client's ability to communicate via the service.
Unlike ICQ, AIM, MSN, and Yahoo instant messenger, Gagné noted, the Jabber protocol is an IETF (Internet Engineering Task Force) standardised protocol. Both client and server software for Jabber are available from the open source community at jabber.org.
Gagné opened the floor to the audience to relate the resistance they are hearing to switching to Linux, and offered responses to the various concerns. The first question was on the easy, or not so easy, installation of video card drivers for Linux.
He answered that this problem is no longer a serious issue and hasn't been for several years. With the exception of some of the latest bleeding edge video cards, while they remain bleeding edge, most video cards are supported properly in Linux. All video cards he is aware of from the last several years support the VESA standard and should at minimum work under Linux.
Question number two was how to address the issues of total cost of ownership and support. Support is the question of "If something goes wrong, who do I call?" he said. With large companies supporting Linux, they will be willing to sign support service contracts with companies that want them. IRC channels and mailing lists are the perennial free support option, and are often faster and certainly cheaper than any Microsoft support requests.
From a total cost of ownership perspective, he said, Linux requires fewer people to manage it and has fewer problems in operation. Because of Linux' networking roots, it is possible to fix a lot of problems remotely and the cost of travel to fix broken systems is dramatically reduced with the use of Linux.
Problems like email viruses cannot be discounted from total cost of ownership calculations, he pointed out, noting the ILOVEYOU mail virus some time ago which brought hospital computer systems, among others, to their knees for days as they tried to recover.
Why, an attendee asked, are there no viruses for Linux? While noting that some proof-of-concept viruses have been written under perfect circumstances, Gagné contrasted the history of Windows with that of Linux, the latter having been developed on the Internet in an environment where security is a concern. Windows, on the other hand, was never initially developed with security in mind.
Windows' tendency to overuse administrative accounts and its memory sharing policies increase its vulnerability to security issues like viruses. He cited a honeypot project report which suggested that an out-of-the-box Windows system put on the Internet had a life expectancy of about four minutes, while a Linux box put on the Internet could be expected to last about three months before being compromised.
A discussion ensued in the room about the possibility of viruses becoming a problem on Linux should it become the dominant operating system. While Gagné acknowledged the possibility, he described the logic of not upgrading to Linux because of it as strange. Why, he asked, would anyone refuse to upgrade from an obviously insecure system to one that might only potentially become so later?
The next question was about training. His answer to the problem of training users on Linux desktops is simple: sit down with it and use it. He described the transition from Windows to Linux as no more of a culture shock than the transition from Windows 98 to Windows XP. Both require some new learning and getting used to.
And that's a wrap
LinuxWorld Toronto is, as of this year, a member of the LinuxWorld show family, rather than the independent operation it had been in previous years as RealWorld Linux. It was a fairly good show. The second day felt a little marketing-droidy, but it is difficult for show managers to know in advance the quality and nature of presentations. The first and third days, and some of the second day's sessions, showed that a good focus could be kept, and that the show can be useful for Linux' continued growth.
Originally posted to Linux.com 2005-04-21; reposted here 2019-11-24.
Posted at 17:17 on April 21, 2005.
LWCE Toronto: Day 2
TORONTO - LinuxWorld Day 2 started at 08:30 with another round of sessions. The day was broken down into one-hour blocks; I attended several, starting with Dee-Ann LeBlanc's presentation on "Linux for Dummies" and keynotes by HP Canada's Paul Tsaparis and Novell's David Patrick.
When I entered the session room, LeBlanc had already started and was finishing up an explanation of what Linux distributions are. She said that a common question she gets is "Which Linux distribution is best?" Her answer was short and to the point: "Whichever one you want."
The key question, she explained, is not what distribution is better than what other distribution, but what is it you want to do with it? Most Linux distros can act as either a desktop or a server, but, she noted, it is important to not overload a desktop with excessive server software. She listed a number of distros as viable options for new users, citing Linspire and Xandros as the super-beginner, first-time Linux user distributions; Fedora (Red Hat), Mandriva (formerly Mandrake), and SUSE as good beginner distributions; and Ubuntu as a little more advanced "but still braindead to install."
Debian: Not for beginners
For servers, she suggested Fedora, Mandrake, or SUSE, noting that Linspire and Xandros are really only desktop distributions and are not recommended as servers. Debian, she warned, is not for beginners; however, it can be approached through the Debian-based distributions Ubuntu, Linspire, and Xandros.
Where should someone looking for a Linux distribution look? She listed distrowatch.com, cheapbytes.com (to order CDs rather than download them), and frozentech.com's Live CD listing.
Immediately following this presentation was the first of four keynote addresses at the conference, primarily delivered by conference sponsors rather than visionary or community speakers. This one was by Novell's Vice President and General Manager for Linux and Open Source, David Patrick. He joined Novell when it bought Ximian, where he had been the CEO. His keynote was entitled "The Dynamic Role that Linux plays in the Enterprise."
He started by saying that Linux adoption at Novell has had massive support from independent software vendors (ISVs), with more than 500 having signed on. Patrick stated that he hoped we all use Firefox. Firefox, he said, is the first real open source application to gain significant market share against Microsoft's products on its own turf.
Patrick estimated there are approximately 3 million production Linux servers in the world today and an additional 10 million Linux desktops. There are 70,000 projects currently hosted on Sourceforge.net, he pointed out, and applications are the key to Linux' success. Venture capital funding is returning to Linux and open source companies for the first time in years. He estimated that there are currently 30 fresh venture-capital-funded open source software projects. As the market grows, so will the need for money, applications, and more projects.
He cited a CIO Magazine study in which a majority of CIOs are adopting Linux for some purpose within their companies. He advised the audience to be aware of and immune to Microsoft FUD (fear, uncertainty, and doubt), noting that the software behemoth has 66 full-time people finding ways to fight Linux.
Patrick gave an update on the status of Novell's internal upgrade to Linux. Twelve months ago, he said, Novell canceled all new contracts and did not renew existing contracts with Microsoft. By the summer of 2004, the entire company had switched from Microsoft Office to OpenOffice.org on either Windows or Linux. By November, 50% of the computers at Novell dualbooted Linux and Windows. By this summer, Patrick expects Novell will be 80% singleboot Linuxonly. With the switch of their services to Linux, Patrick reported performance improvements of around 160%. Detailed statistics either are, or should soon be, on Novell's web site, he said.
I took a break for lunch in the media room and soon found myself in the midst of a CryptoCard press conference announcing a Linux-supporting version of their security system. After a few minutes, I headed off to Jon "maddog" Hall's "Visionary" presentation on "Free and Open Software: Back to the Future."
Maddog: 35 years of computer experience is all he has
Maddog insisted that he disagrees with the term visionary. A visionary, he said, is what people who won't be there in five years to see the results are called. He said all he has to go by is his 35 years of computer industry experience.
Open source, he offered, is not a new idea. It was the dominant form of software distribution from the inception of the first computers in 1943 until 1980. Software was distributed in source code format only, and when you bought software, it was yours to keep. Commercial software was, at one time, developed on contract. A company that needed software would hire a company to write it. If it did not work, was late, buggy, or poorly documented, the company writing it would simply not be paid.
Much of what maddog said was a rehash of things he has said in the past, with little new information compared with his keynote speech a year ago.
The second keynote of the day was by HP Canada President and CEO Paul Tsaparis, speaking on "Linux for the Real World: Leadership through Innovation." He started by telling the audience that "it's about choice," a theme he came back to again and again in his address. Open source solutions, he said, are growing in the enterprise, providing a huge business transformation opportunity.
At this point, he started a marketing video showing HP's cooperation with DreamWorks to make animated movies such as "Shrek" and its sequel using what amounts to HP's cluster rental service. After the video, he explained that, ultimately, operating system choice comes down to a matter of straight economics for Linux and HP as much as for any other company or operating system. Linux, he said, is a $37 billion industry in the U.S. and a $218.5 million industry in Canada. Fourteen percent of HP servers are now shipped with Linux, he said.
Internally, Tsaparis said, HP decides its Linux deployments by the same market principles it suggests other companies use. At the moment, some 13,000 devices within HP run on Linux, including 160 Linux-based DNS servers and 12 enterprise directory services. HP's wireless network also runs on Linux, and its internal corporate messaging is based on the open source Jabber system. At the moment, 2,500 HP employees develop open source software and the company produces some 150 open source products, he claimed.
Does Linux need a killer app to thrive? No, he said, it's all about the services and support available. He then wrapped up his flashy OpenOffice Impress presentation with a second short video on HP and DreamWorks cooperation, and finished without taking any questions.
IBM presentation turns out to be a marketing ad
After the keynote, I went off to see about a session called "Exploring the Use of Linux in High Performance Computing Environments" presented by David Olechovsky of IBM's System Group. His flashy presentation was clearly intended as a marketing aid to sell IBM Blade servers and was of little interest as far as the actual use of Linux within a high-performance computing environment.
He broke operating systems down into five layers:
the application layer
the subsystem layer
the OS layer
the virtualization layer
the hardware layer
The first two pretty well stay the same, he said, but the remaining three layers can vary. This, he said, is the power of Linux: Linux is still Linux regardless of the version, virtualization layer, or hardware, and that makes it a useful platform.
Olechovsky indicated that IBM would be releasing AMD Opteron based blade servers later this month.
Originally posted to Linux.com 2005-04-20; reposted here 2019-11-24.
Posted at 17:10 on April 20, 2005.
LWCE Toronto: Day 1
The first day of Toronto's LinuxWorld Conference and Expo was made up of a pair of three-hour tutorial sessions on various networking and Linux related topics. From the list of available sessions I selected System & Network Monitoring with Open Source Tools for the morning and Applying Open Source Software Practices to Government Software for the afternoon. Unfortunately, the latter was cancelled at the last minute and I went to Moving to the Linux Business Desktop instead.
The System & Network Monitoring with Open Source Tools session was run by Syonex's John Sellens, who described himself as a reformed accountant and 19-year Unix system administrator.
In explaining the importance of monitoring, Sellens described running services without monitoring as simply running software, but with monitoring, they become a service. Sysadmins, he said, must monitor.
Monitoring today, he pointed out, is a different game than it was when he started in the field nearly two decades ago.
At the time, most people operated on terminals logged into a single mainframe. If there was an outage, chairs would be pushed back, people would start talking, and the noise level would go up. The system administrator would see this and fix the problem. It would be fixed nearly immediately, and everyone would go back to work without really ever knowing what had happened.
With the old system, all the system's users would go home at night without taking their work with them.
Availability was a 9 to 5 issue in a lot of cases, with only tasks that did not involve the users running at night. Now services have to be up around the clock as people work from home, using evening hours to browse Web sites, check their email, and for countless other tasks. Many of us, he said, have what he called "externally facing infrastructure."
The choice of monitoring solution is, he said, a near-religious issue. For the vast majority, Sellens recommends SNMP (Simple Network Management Protocol).
Monitoring, he explained, is important because it allows administrators to detect problems before they become serious.
For example, if a hard drive is nearly full and monitoring picks up on this fact, the administrator has enough time to acquire an extra hard drive or some other form of additional space. Thus the outage that would have taken place when the drive filled completely is averted.
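Sellens' hard-drive example translates directly into a simple exception check. This is a minimal sketch using only Python's standard library; the `check_disk` name and the 90% threshold are arbitrary choices for illustration, not anything Sellens presented.

```python
# A minimal sketch of the kind of exception check Sellens describes:
# warn before a disk fills completely, leaving time to add capacity.
# The 90% threshold is an arbitrary choice for illustration.
import shutil

def disk_usage_percent(path="."):
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def check_disk(path=".", threshold=90.0):
    percent = disk_usage_percent(path)
    if percent >= threshold:
        return f"WARNING: {path} is {percent:.1f}% full"
    return f"OK: {path} is {percent:.1f}% full"

if __name__ == "__main__":
    print(check_disk("."))
```

In practice a check like this would be run periodically (from cron, or by a monitoring framework such as the Nagios package Sellens mentions later) rather than by hand.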
Monitoring can be broken down into three main groupings, argued Sellens:
Exceptions are when a problem occurs. Monitoring for exceptions is watching for problems such as website content not being what is expected, mail service being down, or any other problem where something is not quite right.
History is the maintenance of logs for the purposes of dealing with law enforcement, should that ever need to happen, billing for service usage, and monitoring service level agreements for the purpose of service refunds for outages.
Not every system needs the service history kept, he noted. It depends on what services are provided and what an administrator intends to do with logs.
Trend monitoring examines the history of the services to predict the future. It is a means of understanding how systems operate or change over a period of time in order to predict upgrade needs. It also shows what the normal behaviours for a system are.
He cited the example of a friend who ran an e-postcard Web site. Over the course of a couple of weeks, traffic began to inch ever higher on his site, culminating in a large peak on February 14th, Valentine's Day, after which there was a significant drop in traffic. This, he explained, was an example of trend monitoring.
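The upgrade-prediction side of trend monitoring can be sketched with a few lines of arithmetic: fit a line to historical usage samples and extrapolate to capacity. The helper names and the sample data below are invented for demonstration; real tools like MRTG and RRDtool do this kind of analysis over collected time series.

```python
# Illustrative trend analysis in the spirit of Sellens' description: fit
# a straight line to historical disk-usage samples and estimate when the
# disk will reach 100%. The sample data is made up for demonstration.
def fit_line(samples):
    """Least-squares slope and intercept for (day, percent_used) pairs."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(p for _, p in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * p for d, p in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def days_until_full(samples, full=100.0):
    slope, intercept = fit_line(samples)
    if slope <= 0:
        return None  # usage flat or shrinking: no projected fill date
    return (full - intercept) / slope

if __name__ == "__main__":
    # One sample per day: disk creeping up about 2% a day from 60%.
    history = [(day, 60.0 + 2.0 * day) for day in range(14)]
    print(f"projected full at day {days_until_full(history):.1f}")
```

The payoff is exactly the scenario from the exceptions discussion: with a projected fill date in hand, the administrator can order more storage before the outage ever happens.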
Sellens commented on a question frequently asked: What should one monitor? His answer: Anything you care about.
His longer answer was that anything that needs any kind of availability should be monitored. But he advised caution, as too much monitoring is no better than too little. Having too much information, he said, is no more helpful than having none at all, as it leads to increases in false alarms and a glut of information that will never be completely analysed. He advised starting small: Add one tool to monitor the systems. Use it and see what else you need, then add that.
Monitoring, he said, is not about intrusion detection. It's a means to ensure services are provided.
Intrusion detection can, however, be a side effect of the monitoring, but generally is a field unto itself.
Aside from more esoteric uses like tracking pool temperatures and soft drink machine can counts, Sellens listed a wide variety of potential uses for monitoring systems, including, but not limited to, disk space, CPU and memory usage, memory availability, system uptime, network connectivity, UPS (power) status, mail queues, and many others.
The rest of the session outlined various software packages, including MRTG, Cricket, and RRDtool, which are usage analysers and graphers. He also mentioned Nagios, which monitors, attempts to fix, and as a last resort can alert an administrator to the problem.
Marcel Gagné led the afternoon session, a discussion called Moving to the Linux Business Desktop, about how to encourage businesses to move to Linux. In marked contrast to John Sellens' 155-page PowerPoint presentation and printout, Gagné used only a couple of slides and did not even refer to them, opting instead to make it a largely interactive and nonlinear discussion on showing businesses the road to Linux.
To start out, Gagné asked his audience how many people use Linux servers at work. A solid majority of the room responded positively.
How many, he asked, use Linux desktops at work?
For that, only a couple of people were able to respond positively.
I attended Gagné's session on the same topic a year ago, and the changes that have taken place since then are remarkable.
He started with the basic question: Why should a company upgrade from Windows to Linux?
And he gave the basic answer: Virus security and costs.
And how should they go about it?
Start with transition software. GNOME and KDE, he said, are both good graphical environments, but he recommended KDE for users coming from Windows because of its particular look and feel.
According to Gagné, the critical transition applications to get business people started on the transition path are:
He demonstrated the features of OpenOffice.org 2.0 beta, including an option in the word processor to save directly to PDF, bypassing the need for any Adobe tools. He showed a spreadsheet replacement for MS Excel as part of the suite, and mentioned that there is a Microsoft Access replacement available. Unfortunately, it does not read Access files; it only works the same way from the user's point of view.
Firefox, the Web browser, is essentially the same in both Linux and Windows and is an important step in the transition to Linux. Allowing users to change one application at a time keeps it from being overwhelming.
Gaim, an instant messenger client that supports several messaging networks with fewer popups and less general noise than its Windows counterparts, could, he said, provide a logical step for transitioning users away from those Windows packages.
Thunderbird, a mail client, is a complete replacement with a similar look and feel to Microsoft's popular but virus-ridden Outlook mail client.
All these tools run in Windows and allow users to get used to Linux programs which they will then find are still there and familiar after a complete transition.
For Windows applications critical to a user in an office environment, he recommended Win4Lin, CrossOver Office, or VMware, depending on the particular needs and budgets of the few in an office who actually need the software. VMware, for example, is best for developers who need to work in multiple operating systems at one time, but is very expensive compared to the others.
The floor was open to questions for the entire session, giving it the feel of a three-hour-long BOF session rather than a tutorial, and the questions started early on.
The first question was whether or not Mozilla Thunderbird worked with Microsoft Exchange.
Gagné said he wasn't sure, but suggested that following the transition to Linux, users could continue transitioning by switching to Evolution, which has a plugin available to interact with Microsoft Exchange.
One myth many business managers retain is that Linux is poorly supported or not supported at all. He noted that with the backing of large companies such as Novell, HP, and IBM, companies can contract support for Linux as needed. In addition, they can continue to rely on community support. He also noted that when a security problem or bug is found in Linux, it is often fixed as soon as it is discovered, setting it apart from its larger proprietary cousins.
Gagné reasoned that the savings on Microsoft Office and Adobe licenses achieved by using OpenOffice.org could pay for much of the cost of the transition from Windows to Linux.
Another cost saving he brought up, discussed at greater length in his talk last year, was the use of terminals and a terminal server. This solution allows businesses to have only one large server and several easily replaceable thin clients. A side benefit of this approach is that it gives people access to their own desktop at any desk, not just their own.
In KDE, Kiosktool allows administrators to restrict the abilities of terminal client users (or any KDE user, for that matter), limiting the damage users can do and ensuring that the work they intend to do can get done.
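Under the hood, tools like Kiosktool write entries into KDE's configuration files. As an illustration only (this is the KDE 3-era Kiosk framework syntax; exact key names vary by KDE version, so treat the keys below as an assumption), a fragment like this in kdeglobals locks down shell access and the Run Command dialog:

```ini
# Illustrative KDE Kiosk restriction fragment; key names may vary
# by KDE version. The [$i] marker makes the group immutable.
[KDE Action Restrictions][$i]
shell_access=false
action/run_command=false
```

The point is that the restrictions live in plain configuration files, so an administrator can push one locked-down profile to every thin client.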
A member of the audience asked Gagné if this wasn't just a return to the old ways of a terminal server and terminals for each user. Sure, he argued, but now our servers and networks are strong enough to handle them properly.
Another member of the audience was concerned by an apparent dislike among many new users of the concept of multiple virtual desktops. Gagné's response was that while they can be disabled so that new users don't have to encounter them, because most people dislike them at first, in a short time those same users will hurt anyone who tries to take them away.
Among the other questions asked by an inquisitive audience were:
How is Linux multimedia support?
What is available for 3d modeling?
What other desktops are available besides KDE?
What is available for accounting software?
How is Oracle's Linux support?
What kind of training is available for new Linux desktop users?
For the multimedia questions, he referred to a KDE program called Kino for editing digital video. For editing sound, he mentioned Audacity and a program he is less fond of, called ReZound. He did not go into great detail on either. He also noted the existence of digiKam for talking to digital cameras from Linux.
For 3D modeling, he mentioned but did not demonstrate a program called Blender. He noted that all major computer animated movies coming out of Hollywood are now done using Linux clusters, thus making the point that Linux is eminently qualified for the job.
A member of the audience noted that they were using a window manager called evilwm, a very simple, low-overhead window manager. Gagné used the opportunity to demonstrate IceWM, another small window manager, which he recommends for use with thin clients because of its low overhead. He estimated that KDE uses about 60MB of RAM per instance, while IceWM uses only 1MB.
For accounting software, he recommended the popular GnuCash program or the proprietary but free-for-one-user Quasar accounting package. KDE, he said, provides KMyMoney, and he mentioned that several other commercial accounting packages are being ported to Linux.
Oracle, Gagné explained, is a strong Linux backer and is now releasing Linux versions of its software ahead of Windows versions, making Linux its primary release platform.
Last but not least, he said that there is the Linux Professional Institute, a company-independent certification body for Linux administrators, as well as a variety of per-distribution or per-company certifications, but there is no formal desktop training that he is aware of.
Gagn� said that he is not a big believer in certifications, preferring training and courses. For the time being, though, the best desktop training is simply to use it.
He finished off with a quick game of PlanetPenguin Racer, a game much like SkiFree from the Windows 3 era, in which Tux, Linux's very own penguin, goes down a ski hill trying to avoid trees and eat mackerel. The presentation ended on impact with a tree.
Originally posted to Linux.com 2005-04-19; reposted here 2019-11-24.
words - whole entry and permanent link. Posted at 21:00 on
April 19, 2005
Ottawa Linux Symposium day 4: Andrew Morton's keynote address
The Ottawa Linux Symposium wrapped up its busy four days with a six-hour-long bar party at the Black Thorn Café across the street from the American Embassy in Ottawa. For some, that social aspect was what they came for. For most attendees, though, stable Linux kernel maintainer Andrew Morton's keynote address was the highlight of the day. Previous NewsForge OLS coverage: Day one; day two; day three.
LSB testing tools
The last day started with a session by Stuart Anderson on the Linux Standard Base's testing tools.
As discussed on the third day of OLS, the LSB defines a required base design for all flavours of Linux, so that independent software vendors can run their software on top of any Linux, regardless of distribution, provided the distribution complies with the LSB's standards. In order to test compliance, the LSB releases software and tools.
Anderson described the LSB as a binary implementation of the POSIX source API standards.
One of the challenges for the LSB, said Anderson, is that the LSB's tools need to hook into existing applications to test them while they are running, without changing their behaviour and thus nullifying the test. To do this, he suggested three possibilities.
The first is a program called abc, simply "a binary checker". This system works by changing library names in the compiled program to point to its own substitute libraries, which provide the same functionality plus various additional checks to make sure the code works properly. Because this system requires the modification of binaries, Anderson said it was out of the question: there was a risk that an independent software vendor would run its software through the tests and then distribute the compiled programs with the modifications, breaking them completely for all end users.
The second option he proposed was the use of ltrace. ltrace follows a program as it runs and examines what it is doing from the sidelines. Unfortunately, Anderson said, ltrace does not support all the library calls required by the LSB specifications.
The final option he suggested was fakeroot. Fakeroot, he told the audience, pretends that calls to the system have been made, returning friendly but entirely untrue information to the calling program that, yes, that system call has been done. It intercepts the calls and can be used to trace them to ensure nothing LSB-noncompliant takes place, without having any real effect on the system.
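The interception idea can be sketched in Python, though this is only an analogy: real fakeroot interposes on libc calls via LD_PRELOAD, whereas this sketch simply swaps out a Python function. The idea is the same: wrap a call, record it, and report success without touching the system.

```python
# Analogy for fakeroot-style interception: wrap a system call, log it,
# and pretend it succeeded without modifying anything real.
import os

intercepted = []
real_chown = os.chown

def fake_chown(path, uid, gid):
    """Record the call and report success; the real file is untouched."""
    intercepted.append((path, uid, gid))
    return None

os.chown = fake_chown
try:
    os.chown("/etc/passwd", 0, 0)  # would fail for a non-root user
finally:
    os.chown = real_chown          # restore the real call

print(intercepted)  # → [('/etc/passwd', 0, 0)]
```

A tracer built this way sees every call the program attempted, which is what makes the approach useful for compliance testing: the test can observe behaviour without perturbing the system under test.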
While Anderson went into very deep detail about the technical aspects of the verification process, the important points were clear.
The code tests administered by the LSB's self-tests do not cover any problems outside of the LSB specifications. If a test runs across a serious problem, it is not the responsibility of the test code to warn the developer running it, unless the problem violates the LSB's specifications. The tests validate data types and data for conformance with LSB specifications.
The LSB spec allows only a small handful of ioctl()s. These are ways for programs to talk directly to hardware; however, they can change from one kernel version or hardware platform to the next, and are not advised except in a small number of cases. The LSB forbids the use of ioctl()s that are specific to any piece of hardware.
Sometimes, however, applications being tested do not call these ioctl()s directly, said Anderson; rather, the functions they call invoke ioctl()s in turn. This makes compliance more difficult for some software vendors.
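This indirection is easy to demonstrate: on Linux, the C library's tcgetattr() is implemented as an ioctl() on the terminal's file descriptor, so code can perform an ioctl() without ever naming one. A small sketch using Python's standard pty and termios modules:

```python
# This code never mentions ioctl(), yet on Linux tcgetattr() is
# implemented underneath as an ioctl(TCGETS) on the file descriptor.
import os
import pty
import termios

master_fd, slave_fd = pty.openpty()   # a pseudo-terminal to query
attrs = termios.tcgetattr(slave_fd)   # issues an ioctl() under the hood
print(len(attrs))                     # → 7 (iflag..cc fields)

os.close(master_fd)
os.close(slave_fd)
```

A vendor auditing only its own source for ioctl() calls would miss this one, which is exactly the difficulty Anderson described.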
The LSB test software is about 80,000 lines of code across 2,000 files in the first layer. The second layer, which is the meat of the actual tests, is about 100 files and 15,000 lines of code.
The LSB test package still has a lot of work to do, Anderson said. The performance hit of the tests on the software being tested needs to be quantified, for one, though the list of technical details that need work is longer.
The LSB now supports multiple architectures, said Anderson. This TODO list item has been completed, allowing the Linux Standard Base to apply to more than just x86 systems.
He concluded by inviting all to join the LSB's real-time discussion channel, #lsb, on irc.freestandards.org.
Conary: An alternative packaging system
Erik Troan of Specifix ran the last formal session of this year's Ottawa Linux Symposium in room C, on an alternative packaging system developed by his company.
Conary is designed as a distribution-wide or even multi-distribution package management system. As Troan explained it, it essentially takes the best of RPM, CVS, and Gentoo and combines them, with some additional features, into a comprehensive revision control system and package manager.
The entire system is released under the IBM CPL, an OSI-approved license which Troan described as being similar to the GPL.
Troan explained that the Conary package management system had been released just one week before his session. Its fundamental purpose is to allow entire distributions of Linux to become fully customisable, so that a distribution of Linux can be created for internal use within a company, or for a niche market, with minimal hassle.
The decision to release all the code they had spent the last several months working on, he said, was a difficult one. Specifix's business model is not selling the distribution system as a package, nor hosting a package repository for customers. Instead, the company plans to build its own distribution, specifically designed to be highly customisable, sitting inside the framework it built first. This distribution and distribution management system will then be sold to companies, particularly embedded companies, seeking a way to manage their own distributions.
Conary, he explained, is meant to be to distributions what BitKeeper has become to the kernel.
Fighting GPL violations
After lunch, Linux firewall package netfilter contributor Harald Welte described his efforts at fighting corporate violations of the GNU General Public License in his native Germany.
His discussion was a BOF session, interactive by nature. He said he has been making a living as a free software developer since 1997 and is a code author, not a lawyer.
But Welte is not satisfied with the way the Free Software Foundation has been handling GPL violations, and has taken it upon himself to handle them the way he sees as proper.
Welte said that the FSF's approach to GPL violations is to approach companies found violating the GPL and quietly negotiate with them to stop the violation in their current product, with little public fanfare or scrutiny. This approach, he said, has caused many such cases to go completely unnoticed.
With the system the FSF is using, said Welte, companies have no incentive to stop violating the GPL. If a company violates the GPL and negotiates with the FSF to stop, by the time it agrees to stop, the product is done and the company has moved on to the next one, which could also violate the GPL. It can then go through the whole process again without really losing much.
Welte said that in Germany he can only act on cases that affect his code directly. Should he find iptables/netfilter Linux firewall code in a proprietary device such as a router, his first course of action, he said, is to reverse engineer the device (a right still protected in Europe, an audience member pointed out) to confirm the violation.
The second step is to provide notice to the company that they are in violation of the GPL and inform them that injunctive action will be taken if they do not attempt to remedy the situation.
The third step, said Welte, is to wait and see if the company responds.
The final step, if the company has still not responded, is to go to court and seek a preliminary injunction against the product that is violating the GPL, preventing it from being shipped in Germany, effective immediately, until the problem is resolved, and subjecting the company to fines should it continue to ship the product.
He cited a recent case involving a company called Sitecom, which ignored all his communications until receiving a notice from the court bailiff that there was now an injunction against distributing any of its products. Only then did the company even hire a lawyer to address the problem. The company responded by releasing the modified source code to the device it was selling, but insisted on appealing the injunction anyway.
The long and the short of his presentation was that GPL coders need to take preemptive measures, such as not correcting typos in code, to make it easier to spot code violations.
If you do find a GPL violation, says Welte, don't go to the media with it; go to a lawyer. In order to get a preliminary injunction, there needs to be evidence of urgency, and for that, the court needs to feel that it was the first party to be contacted after the offender itself.
Andrew Morton's keynote address
The last day of the conference saw the traditional keynote address.
This year, last year's keynote speaker, Rusty Russell, introduced this year's keynote speaker: Linux's very own Andrew Morton, the current maintainer of the stable release of the kernel.
Morton's speech addressed the reasons for inherent monopolisation in the system software markets.
Independent software vendors and hardware manufacturers are both interested in having one stable platform for which to release software or hardware and drivers, he said. As a result, both software and hardware manufacturers look to one or a small number of players as being relevant in the operating system market.
In order for a new player to get into the market, as Linux has done, the new player has to develop for itself all the hardware drivers that would normally be vendor-supplied, and new software has to be written across the board.
As a result, new operating systems and operating systems companies seldom show up or survive once they do.
Morton went on to state that he does not believe that the Linux kernel is likely to fork simply because of the volume of work that would be necessary for a forked kernel project to maintain their code base.
The Linux kernel decision making process, he said, is a consensus based process. It would take a serious rift to get to the point where Linux could actually have a serious fork.
Morton made a point of thanking Richard Stallman for his work and for being consistent throughout the years in his viewpoint and actions. We owe him a lot, he said.
Morton indicated that companies have been offering up developers to the free software community, and those developers are often becoming hooked on the whole free software notion. These developers, he pointed out, will eventually become managers and will have a better understanding of the free software world.
"World domination is proceeding according to plan," he concluded.
Taking questions after his speech, Morton suggested he did not believe that Linux would ever necessarily change to a 3.0 kernel version. As the project moves forward, the need for major changes decreases, and thus the need to change the major version number from 2 to 3 drops.
Following Andrew Morton's keynote address, the CE embedded Linux group gave away three door prizes, two of which were Sharp Zauruses.
To wrap up the formal part of the conference, the audience gave a warm thank-you to the conference's organiser, Andrew Hutton of Steam Balloon, who, with a number of volunteers, has put on the Ottawa Linux Symposium for the last six years, making it a central part of the development and discussion process for Linux developers the world over. In recognition, Hutton received a standing ovation at the end of OLS, the only one of the conference.
Originally posted to Linux.com 2004-07-25; reposted here 2019-11-24.
words - whole entry and permanent link. Posted at 17:00 on
July 25, 2004
OLS Day 3: Failed experiments, LinuxTiny, and the Linux Standard Base
More news from the OLS by our man on the ground, David Graham. Graham reports that Day 3 began with a presentation by Intel's John Ronciak and Jesse Brandeburg on writing Linux kernel drivers for gigabit and ten-gigabit network interface cards.
That's too much data
Ronciak told the audience that gigabit and ten-gigabit networks have some issues with the kernel's handling of network traffic throughput, and that means need to be developed to increase its efficiency.
Brandeburg told us that their attempted solution to the problem was a method called page flipping. A page is a fixed amount of memory, and page flipping means that incoming network traffic fills one page before being 'flipped' over to the application expecting it. The idea is that buffering into pages should improve efficiency, as the kernel is only doing one thing at a time.
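The buffering idea can be sketched abstractly: instead of handing each packet to the application as it arrives, fill a fixed-size page first and hand the whole page over at once. The sizes and function below are toy illustrations of the concept, not the Intel driver code:

```python
# Sketch of the page-flipping idea: coalesce incoming packets into
# page-sized chunks before delivery. Toy sizes; real x86 pages are 4KB.
PAGE_SIZE = 8  # bytes per page in this toy example

def deliver_in_pages(packets):
    """Return the page-sized chunks ("flips") delivered to the app."""
    page, flips = b"", []
    for pkt in packets:
        page += pkt
        while len(page) >= PAGE_SIZE:     # page is full: flip it
            flips.append(page[:PAGE_SIZE])
            page = page[PAGE_SIZE:]
    if page:
        flips.append(page)                # partial final page
    return flips

print(deliver_in_pages([b"abcd", b"efgh", b"ij"]))
# → [b'abcdefgh', b'ij']
```

The hoped-for win is fewer, larger hand-offs; as the next paragraph notes, in this case the measured result did not bear the idea out.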
FreeBSD, Brandeburg said, already implements such a system in its kernel; however, Brandeburg's implementation failed to be any more efficient than the existing Linux networking code, and actually came out less efficient in all test scenarios.
Tiny is better
Later in the morning, Matt Mackall talked about the LinuxTiny project.
LinuxTiny is a project to reduce the size of the Linux kernel and its footprint, or size in memory, for use on older legacy hardware or embedded systems.
Mackall explained that the Linux kernel has become bloated over the last ten years and has a lot of room to shrink. He explained very rationally how this came to pass.
Linux kernel hackers got jobs.
In 1994 the kernel was at version 0.99 and could happily run on a 486 SX running at 16MHz with 4 MB of RAM.
By this year, 2004, the kernel had arrived at version 2.6 and could happily run on a 1024 node Intel 64 bit (ia64) architecture cluster with multiple terabytes of RAM.
When Linus got his job at Transmeta, he was suddenly entrusted with a computer with 512MB of RAM and various other improvements over the older hardware he had been developing Linux on.
Memory use and disk use became less of a priority, and functionality and features took the forefront.
Over the period of 1994 to 2004 there was a huge growth in personal computing and Internet use, and a constant massive reduction in hardware costs. Coupled with Moore's law of ever accelerating hardware, this led to the loss of the concept of running Linux on small, old systems.
Linux has grown, Mackall went on, one small change at a time. Eventually, lots of small changes add up to large changes, significant improvements in various kinds of performance, and increased size.
Mackall's project, LinuxTiny, aims to reverse this trend. He noted that it was nice to be scaling in the opposite direction for a change.
The means by which LinuxTiny reaches its small memory footprint and kernel image size is a radical trimming down of the kernel's less necessary features.
Mackall described the various steps he took to reduce the kernel's size by removing extraneous code, wasted memory, and unneeded text output. His stated goal was to run a small Linux system whose sole purpose in life was to run a web server with no bloat.
He has so far reduced the kernel to a 363KB image, significantly below the 1.9MB image of a default compile of kernel 2.6.5, and comparable to the 301KB image of 1994's kernel 0.99pl15 in Slackware 1.1.2. Memory consumption is down to less than 2MB, and together this means that LinuxTiny can run efficiently on embedded systems and legacy hardware, as Linux did before kernel hackers got jobs.
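To give a flavour of the trimming involved, a kernel .config for a minimal system might disable options like these. This is an illustrative fragment only; the options shown are standard 2.6-era kernel options, while the actual LinuxTiny patch set added its own finer-grained knobs beyond them:

```
# Illustrative .config trimming in the spirit of LinuxTiny
# (standard 2.6-era options; the LinuxTiny patches add more knobs).
CONFIG_EMBEDDED=y               # unlock the space-saving options
CONFIG_CC_OPTIMIZE_FOR_SIZE=y   # compile with -Os
# CONFIG_PRINTK is not set        -> drop kernel message text
# CONFIG_BUG is not set           -> drop BUG() reporting code
# CONFIG_SWAP is not set          -> no swap support
```

Each disabled option removes code and, often more importantly for small systems, the static data that goes with it.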
OpenOffice is open for developers
Michael Meeks of Ximian, now owned by Novell, gave a presentation called the "Wonderful World of OpenOffice.org".
OpenOffice.org, he said, is sexy, but it still has a lot of work to be done.
OpenOffice.org, he said, needs more developers. Stop working on the GNOME and KDE office suites, he implored; they have served their purposes, and there is now a viable open source office suite: OpenOffice.org. Sun, he said, has done it right by releasing StarOffice as OpenOffice.org under a proper open source license and supporting and developing it within that framework.
He gave a comparison of sizes of various packages in terms of the total file size of the tar.bz2 archive files. KOffice, he pointed out, has less code than the Linux kernel, which in turn has a fraction the code of all of KDE, which in turn has less code than OpenOffice.org.
He explained that Sun has released OpenOffice.org under a pair of licenses. One is the Lesser GPL, or LGPL, and the other is the SISSL, which allows software to be modified and the source not redistributed, provided the API and file formats are not affected.
If SISSL-licensed code has been modified and binaries released that change either the API or file-format conformity with the original code base, the source code must be released.
Sun's biggest problem with OpenOffice.org, said Meeks, is that it is stuck in a retail-oriented boxed-set mentality. Every 18 months a new version must be on store shelves, which means nine months of a release cycle go to creating new features, and the other nine months to debugging.
Thus a new feature implemented immediately after this spring's OpenOffice.org feature freeze will not be in the boxed package on store shelves until the end of the next release cycle, some 27 months later, or as late as the end of 2006. Two years between the introduction and the delivery of a feature is too long to keep developers' interest, or for the feature still to be useful when it arrives, he pointed out. This is not the way to do it.
He also believes that Sun lacks any real understanding of how to create a community. A community, he said, cannot be manufactured, as much as Sun tries. High mailing list volume does not make for an active community. Most of the developers of OpenOffice.org are full-time Sun employees assigned to the task.
While there are a few outside of Sun, the numbers are low, and Meeks actively encouraged members of the community to join the effort.
He said that he would be willing to hold the hand of any developer who wishes to step into the OpenOffice.org developing fray. There is a lot of work to do on OpenOffice.org and there is a hunger for new developers to help with the work.
Anyone who wants to join, he said, should see ooo.ximian.com/lxr/, where information and examples on developing OpenOffice.org can be found. He demonstrated a simple single-line code change that shifted the layout of some buttons within the OpenOffice.org interface, and suggested that being able to see the real effects of changes might be a better motivator for would-be developers.
Every three months, OOo puts out a minor release, he said. Unfortunately, minor releases must remain small, and thus a bug in a low-level part of the program that requires across-the-board changes must wait for the full 18-month release cycle to be fixed. That, combined with inconsistencies between distros, adds to the complications of regular releases of the OpenOffice.org office suite.
He believes that OpenOffice.org is the only viable office suite and that more programmers need to help out, a point he drove home repeatedly.
He outlined some of the changes that still need to be made.
Lots of polish is still required to improve the suite. There are a lot of small but big-hitting changes left to be done, and many of the performance problems are not hopeless; they just need someone working on them.
The LSB and me
In the evening, Mats Wichmann, chair of the LSB, and Dirk Hohndel, Intel's director of Linux and open source strategy, hosted a Birds of a Feather (BOF) session discussing the Linux Standard Base project and its impact on Linux and independent software.
He pointed out at the outset that if the LSB were to dictate a system for upgrading packages within the Linux Standard Base, all the distributions would balk at the loss of the one form of lock-in they have. While he doesn't blame them (it is their business model), it is such things that constrain what the Linux Standard Base can do. Besides, he said, distribution lock-in is mostly a psychological problem, not a technical one.
The Linux Standard Base is a common set of requirements for all Linux distributions that can be used instead of requiring particular distributions at particular levels to run software.
Currently in development is LSB version 2.0, which will be a certification-based system. Software that is LSB 2.0-certified will be certified to run on any Linux distribution that is fully compliant with LSB 2.0's base requirements.
The requirements specify in detail what software is available at the lowest level of the distribution and what package formats must be supported, namely RPM.
By using the Linux Standard Base, the distribution end users run will no longer govern what software they can run on top of it. As long as the software is compliant with the Linux Standard Base, it will run.
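One simple kind of check behind such a guarantee is verifying that a binary's required libraries fall within a permitted base set. A toy sketch of the idea; the library names below are illustrative, not the actual LSB list:

```python
# Toy compliance check: does an application depend only on libraries
# in a permitted base set? Names are illustrative, not the LSB list.
LSB_LIBRARIES = {"libc.so.6", "libm.so.6", "libpthread.so.0", "libz.so.1"}

def check_dependencies(required):
    """Return the required libraries outside the permitted base set
    (an empty set means the application passes this check)."""
    return set(required) - LSB_LIBRARIES

app_needs = ["libc.so.6", "libm.so.6", "libcustomcrypto.so.2"]
print(check_dependencies(app_needs))  # → {'libcustomcrypto.so.2'}
```

Real certification goes far beyond this, as the test-tool session on day four described, but the principle is the same: define the base, then mechanically verify that software stays inside it.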
The LSB does not cover embedded Linux; the standards for that are set by the Embedded Linux Consortium. The LSB attempts only to address the needs of desktop and server Linux.
LSB version 1.3 has been out for some time, and its shortcomings have prevented it from going through the process of having a certification system set up. Using the drawbacks and lessons learned from version 1.3, the largely hardware-company-funded LSB project is working toward version 2.0, which is currently at the release candidate stage and should be released relatively soon.
The LSB project is not expecting perfection with version 2.0, they said, but it is a base from which to work toward a perfect version 2.1 containing anything they forgot. Evidently, they do not yet know what will change in 2.1, as they have not yet remembered forgetting it.
The LSB does not plan on releasing a written specification alone; it is to be released with a set of testing tools and test suites to ensure compliance and to confirm the realistic requirements of the written documents.
Distributions of Linux have varying degrees of acceptance of the LSB project. While the presenters refused to answer direct questions about whether any distributions were hampering the development, they suggested that some distributions that believed they were already in a monopolistic position within Linux could be less interested in supporting a project that levelled the Linux distribution playing field. The LSB is expected to release version 2.0 within weeks.
Originally posted to Linux.com 2004-07-24; reposted here 2019-11-24.
words - whole entry and permanent link. Posted at 16:40 on
July 24, 2004
Linux symposium examines technicalities of upcoming Perl 6
OTTAWA - Day 2 of the four-day Linux symposium here was a highly technical one. It began with Rik van Riel of kernelnewbies.org and Red Hat, along with a host of other members of the CKRM kernel resource management project, explaining how it works.
CKRM, Riel said, is all about more efficient resource management. When processes are competing for memory and processing time, the CKRM kernel patch improves efficiency and allows all processes an equal chance at the CPU and RAM.
The particular task of CKRM is to prevent any one user or process from bogging the system down so much that other processes are unable to function. As an example, he said, with CKRM, should a Web site on a server be the recipient of a 'slashdotting', the kernel would ensure that other Web sites and services on the server still have enough resources to function, even while that one no longer can.
Protecting virtual machines from each other
The way it all works is that users and tasks can be assigned a level of priority and a minimum amount of resources they are guaranteed. The whole system is subdivided hierarchically into subclasses. Each is allowed a portion of its parent's resources, so it is not possible to assign more resources than are available.
One of the major uses of CKRM, explained Riel, is protecting virtual machines, discussed later in this article, from each other. If one virtual machine uses too many of the system's resources, it won't affect other virtual machines running on the same physical machine, simply because CKRM won't allow it: both will have minimum service levels and be limited so as to allow each other to function.
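The hierarchical guarantee scheme described above can be sketched in a few lines: each class hands out shares of its own allocation, so children can never be promised more than their parent actually has. The class and share names here are illustrative, not CKRM's actual interface:

```python
# Sketch of hierarchical resource classes: a child's guaranteed share
# must fit inside its parent's allocation. Names are illustrative.
class ResourceClass:
    def __init__(self, name, share):
        self.name, self.share, self.children = name, share, []

    def allocated(self):
        return sum(c.share for c in self.children)

    def add_child(self, name, share):
        if self.allocated() + share > self.share:
            raise ValueError(f"{name}: parent {self.name} over-committed")
        child = ResourceClass(name, share)
        self.children.append(child)
        return child

system = ResourceClass("system", share=100)
web = system.add_child("web", share=60)    # a slashdotted site maxes out at 60
mail = system.add_child("mail", share=30)  # mail keeps its guaranteed 30
# system.add_child("batch", 20) would raise: only 10 shares remain
```

Because over-commitment is rejected at assignment time, every class's minimum service level can always actually be delivered.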
CKRM also allows IP addresses and ports to be managed the same way on a server. Should one port be receiving so much traffic that it is preventing another port's daemon from receiving any traffic, the kernel will step in and assign the minimum resources necessary to allow both to receive traffic.
The whole notion of CKRM is supported, Riel said, by Linux kernel leader Linus Torvalds, but large parts of it need changing.
CKRM has very low overhead above the existing kernel scheduler, he explained. In other words, doing this additional protection for processes on a Linux system does not significantly slow down the server.
CKRM will not prevent a process from using all the system's resources unless and until another process needs them, so the performance hit from it to individual processes is not significant, either.
Van Riel said the future of CKRM lies in making priorities visible to users, and not just to the administrator.
Chris Wright of Open Source Development Labs started off the second session of the day discussing Linux virtualization: the setting up and use of virtual machines within Linux.
Wright told us that virtualization has its roots in compatible time-sharing systems. Time-sharing, he told us, was developed by IBM in the interest of splitting processing time among multiple projects on what were then extremely expensive computers.
IBM's history on this subject
In 1964, IBM opted for batch processing rather than time-sharing, much to the ire of university users. By 1965, IBM had gone back to the time-sharing philosophy of multiple uses of one computer. In 1972, IBM introduced the System/370 with address relocation hardware.
Virtualization, Wright explained, is an abstraction layer that manages mappings from virtual to physical, or real, resources. It allows a program to believe it is running directly on the hardware, even though it is really only talking to another program, a kernel, which talks to the hardware on its behalf without its knowledge.
Virtual machines have a variety of uses, he said. They can be used for resource isolation for the purposes of security, sandboxing, or honeypots.
The fundamental concept of virtual machines is that an operating system can run inside another, and thus provide an isolated environment. The operating system running inside cannot see the parent operating system and believes that it is actually the parent. Thus if it crashes, is compromised, or has any of a variety of problems, it can die without affecting the host computer. You can run several of these on the same computer, providing many "computers" for people to use with only one real one. User-mode Linux is the best-known such system and is often used by co-location providers to provide virtual servers to customers at a very low hardware cost, because the customers do not each need their own physical machines.
The goal of virtualization is complete virtual machines with their own hostname, IP addresses, /proc file system (the directory Unices use to store information about processes running on the system), and so forth. A true virtual machine, he said, is not aware that it is virtual: the "computer" its operating system is running on is not a computer but, in fact, another operating system.
Wright said that a good test of virtual machines is to run them inside each other. If they survive that, the virtualization is truly complete, though such a system would run too inefficiently to be of any real use.
After lunch, Damian Conway of Linux Australia gave a presentation on what is new in Perl version 6.
The current version of the Perl scripting language, very common on the Internet (including on this Web site), is 5. Perl 6 has been in development, Conway said, for four years now, and is expected to be released in about two more years, around mid-2006.
Conway described Perl 5 as the test version of Perl and Perl 6 as the real version. They are taking years to develop it, he said, to make sure they get it right. Its developers have learned what works well and what doesn't, what is intuitive, and what is counterintuitive. Perl 6's intention is to fix it all.
Perl goal: Wider adoption of Unicode
Perl 6 supports Unicode characters natively within the code. Conway told the audience that one of the goals of Perl 6 is to bring about wider adoption of Unicode.
Unicode is the multibyte character system that allows more languages than just Latin-based ones to fit within our character set. Instead of the 100-odd characters to which we are currently limited, Unicode allows for tens of thousands.
As a result, there are more characters available for Perl to use as instruction characters. The yen symbol, for example, Conway said, now represents the zippering together of two arrays; by using the yen character, two arrays can be interlaced, with one variable taken from each, alternating back and forth. The changes he outlined for Perl 6 over Perl 5 are extensive but do not eliminate the feel of the code being Perl. He explained that Perl 6 is a needed improvement. It is an opportunity for past mistakes in Perl to be rectified, and the development team will take as long as necessary to make sure Perl is done right this time.
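The interlacing behaviour Conway described can be illustrated with Python's built-in zip(), which pairs elements from two lists in the same alternating fashion; Perl 6 simply spells the operation with the yen character (or a keyword alternative).

```python
# Interlacing two arrays, as Perl 6's "zippering" operator does:
# take one element from each list, alternating back and forth.
xs = [1, 2, 3]
ys = ["a", "b", "c"]

interlaced = [item for pair in zip(xs, ys) for item in pair]
print(interlaced)  # [1, 'a', 2, 'b', 3, 'c']
```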
He noted that there would not be much patience in the community for another major rewrite of the language. Perl 6, Conway said, must last 20 or 30 years.
One of the challenges facing the Perl development team is ensuring backward compatibility with Perl 5.
Conway said they considered it and that there was only one solution.
Perl 6 will not be backward compatible with Perl 5, Conway advised us. There is no way to do it and still implement the necessary improvements. Perl code will have to be updated to use the new language features and keywords.
Perl 6 is a cleaner version of the language with more logical conventions. All scalar variables, and only scalar variables, for example, said Conway, will be identified with the '$' sign, instead of various uses of various variables getting various identifiers like $, @, and %. Virtually every available special character has been put to use in the new version of Perl, including some in the Unicode space outside of standard ASCII, though all such characters have keyword alternatives.
The new version of Perl includes a try() and catch() mechanism for risky functions that operates differently from other languages that use it. The catch() is found within the try() in Perl 6, instead of after the try() as it would be in Java, Conway explained.
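For contrast, here is the handler placement most languages use. Python, like Java, puts the handler after the protected block rather than inside it; the function name below is hypothetical.

```python
# In Python (as in Java), the exception handler follows the try body.
# Perl 6, by contrast, nests its CATCH/catch() inside the try block itself.

def risky(n):
    if n == 0:
        raise ZeroDivisionError("n must be nonzero")
    return 10 // n

try:
    result = risky(0)
except ZeroDivisionError:   # handler comes *after* the try body
    result = None

print(result)  # None
```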
'Linux on Laptops' session
In the evening, there was a session led by Len Brown and Rusty Lynch on "Linux on Laptops." It was an interactive session, affectionately known in the community as a "Birds of a Feather," or "BOF," session.
Brown started it off by discussing suspend/resume operations on Linux laptops. Other than laptops with S3 video cards, he said, most Linux laptops do not take well to being put to sleep, often failing to wake up afterward. Unfortunately, he said, solving the problem will likely require laptop-specific drivers for sleep mode.
Docking station support, it was discussed, is virtually nonexistent in Linux, and wireless NIC support, though functional, needs more work, especially for newer cards like Intel's ipw2200, a brand-new onboard wireless card that currently is not supported in Linux.
Power management support in Linux is also lacking. The topic was discussed at length, and it was agreed that simple measures such as reducing the brightness of the laptop's LCD can go a long way toward preserving battery life.
Originally posted to Linux.com 2004-07-23; reposted here 2019-11-24.
Posted at 16:37 on July 23, 2004.
Ottawa Linux symposium offers insight into kernel changes
OTTAWA - The Ottawa Linux Symposium is an annual limited-attendance conference in the heart of the Canadian capital. Linux developers from all over the world descend on the Ottawa Congress Centre for four days to discuss various aspects of Linux and alcohol consumption. The first day of the conference featured presentations on various topics, from running Linux under Windows and new versions of the NFS protocol, to PGP, X, satellites, and publishing.
Dan Aloni started the first presentation in Room B on the subject of Cooperative Linux, a project similar to user-mode Linux (UML) except that it is designed to run a Linux kernel on top of Windows as well as within Linux.
The current project, only at version 0.6.5, is based on the Linux 2.6.7 kernel. It is a 135KB patch to the kernel source, and it can run on top of the NT kernel for Windows 2000, allowing it to function as an unemulated Linux virtual machine on top of Windows.
Real and cooperative kernels
The actual changes to the Linux kernel are, said Aloni, minimal. With the patch applied it is a compiletime definition to select whether to build the kernel as a real kernel or as a cooperative kernel.
While we did not see a demonstration under Windows, Aloni showed us Cooperative Linux (colinux for short) under X on his laptop. Linux booted inside a window on the screen and was, for all intents and purposes, a separate Linux system.
Aloni explained that the way colinux works, it cannot talk directly to hardware. Anything it needs to do with hardware, it has to ask the parent kernel to do. He told us colinux runs in "Ring 0"; in other words, there is no security on the part of the parent kernel to protect the computer from colinux: it has free access to do whatever it needs to do. It also means that if colinux crashes, there is a good chance of taking down the whole computer, and not just the virtual machine.
The colinux virtual system and the host operating system are able to communicate using simulated network interfaces, and an unlimited number of instances of colinux can be run at any time until all available RAM is used.
The second presentation in Room B was canceled, but the Room C presentation was about NFS version 4, so I moved over there.
J. Bruce Fields began his presentation on NFSv4 by telling us that most NFS implementations use either version 2 or version 3. Version 4, he told us, has been under development at the University of Michigan since around 1999 or 2000.
NFS version 4 is not based on earlier NFS versions, he said, but is written completely from scratch. The University of Michigan's implementation of NFS version 4 is nearly complete, lacking only complete server-side reboot recovery.
What does NFS really stand for?
NFS has often been called "No File Security," he mentioned, but NFS version 4 solves many security issues.
Using public key security and Kerberos authentication, NFS version 4 solves the problems of files being transferred in plain text, and the lack of proper verification of users.
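On Linux, Kerberos protection for an NFS export is requested with the sec= export option. The paths and hostnames below are hypothetical, and deployment details (Kerberos realm setup, keytabs) are omitted; this is only a sketch of what such an export can look like in /etc/exports:

```
# /etc/exports -- hypothetical NFSv4 export requiring Kerberos
#   krb5  = authentication only
#   krb5i = authentication plus integrity checking
#   krb5p = authentication plus privacy (traffic is encrypted)
/export  *.example.com(rw,sec=krb5p)
```

With sec=krb5p, file data no longer crosses the wire in plain text and users are verified by Kerberos tickets rather than by the client's claimed numeric IDs.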
At the start of the afternoon, Keith Packard of HP's Cambridge Research Labs discussed the problems facing the X project in its quest to speak to hardware through the kernel instead of directly.
When X was started, Packard told the standing-room-only audience of hundreds of people, it ran on top of closed source operating systems that did not prevent user-level applications from communicating directly with hardware. As a result, the only way X could be run was to talk directly to the video hardware. By the time Linux came around, that was a firmly entrenched way of doing business.
The problem with this setup is that if X ever crashes, because it is communicating directly with the hardware, Linux cannot regain control of the hardware and the entire system locks up. Further, X manipulates memory directly, and there is always a risk that X and Linux will not have the same idea of what is where in memory.
Hotplugging is a term used to describe any piece of hardware that can be added or removed while the computer is on.
Packard told us that hotplugging of monitors is not yet supported properly. A lot of code, he told the audience, "knows" that the monitor does not change after X has been started.
A lot of code, he said, needs to be moved out of X and into the kernel. His parting thought was that perhaps the console should be abolished altogether and the entire Linux kernel should run in X, which got Alan Cox to heckle that he would be lynched if he tried.
PGP: Pretty Good Privacy
Dan York introduced a small group to PGP. PGP is Pretty Good Privacy, a file and email encryption and authentication system; GNU Privacy Guard is one implementation of it.
He explained how to set up a key ("gpg --gen-key" at the command line, using GNU Privacy Guard) and how to use it, sign keys, and send them around.
An important part of the PGP system is the web of trust, he told the audience. Verifying a key requires more than just the program identifying the key as coming from whoever claimed to have sent it. That person should be verified by having met someone who has met someone who has met someone... who has met that person. The web of trust requires PGP users to sign each other's keys and state a level of trust.
When users have met each other and signed each other's keys, there is a clear path of people who have met each other, and the identity of the person sending the key, or the key-signed email or file, can be verified.
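The "clear path" idea reduces to reachability in a graph of signatures. Here is a toy model, with entirely hypothetical names; real PGP trust calculations also weigh the stated trust levels, which this sketch ignores.

```python
# Toy web-of-trust model: a key is considered verifiable if a chain of
# signatures connects you to it.
from collections import deque

signatures = {            # signer -> set of keys that signer has signed
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": {"dave"},
}

def trust_path(start, target):
    """Breadth-first search for a chain of signed keys from start to target."""
    seen, queue = {start}, deque([start])
    while queue:
        person = queue.popleft()
        if person == target:
            return True
        for signed in signatures.get(person, ()):
            if signed not in seen:
                seen.add(signed)
                queue.append(signed)
    return False

print(trust_path("alice", "dave"))   # True: alice -> bob -> carol -> dave
print(trust_path("dave", "alice"))   # False: signatures are one-way
```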
In the afternoon, former Debian Project Leader Bdale Garbee and Hugh Blemings held a session on ham radios, the IRLP, and the Amateur Satellite (AmSat) project.
Billed as a birds of a feather (BOF) session, it was actually a presentation. Blemings demonstrated the Linux-based Internet Radio Linking Project by having a conversation with a person in New Zealand who was talking from his car.
Amateur satellite project on way to Mars?
Garbee talked about the Amateur Satellite project, which has now successfully launched 51 satellites into space. Currently one is in the works that would be the first amateur satellite to travel all the way to Mars, with the goal being to get into Martian orbit.
In 1957, he said, Russia launched the Sputnik spacecraft, starting the space race. Just four years later, ham radio operators launched the first amateur satellite, known as OSCAR. OSCAR 7 was launched in 1974 and ceased to function from 1981 to 2002, when it inexplicably resumed working, making it, at 30 years, the oldest amateur satellite still functioning in orbit.
AmSat launched a science experiment on one of its satellites to determine the quality of the GPS signal above the altitude at which the GPS satellites orbit. It discovered that it was able to read signals from the other side of the planet, from satellites that were not intending to send signals in its direction. The U.S. Air Force was not willing to help AmSat find this information, as the maximum altitude at which the GPS signal works is classified.
Naturally, Garbee said, it was AmSat's duty to publish its findings about the GPS signals.
The day was wrapped up with a brief presentation from AMD, a major corporate sponsor of the conference, and a lengthy and very good presentation from Jim Munroe on the similarities between the war between free and proprietary software and the struggle between authors and major publishers amid media consolidation.
An engineer from AMD made a point of thanking the people in the room, the Linux community, for helping bring AMD's 64-bit desktop architecture to the mainstream.
Sci-fi satire books
He introduced Jim Munroe, author of "Flyboy Action Figure Comes With Gas Mask" and founder of the organization No Media Kings. He discussed his science fiction books which satirize corporations, marketing, and science fiction's tendency to predict the future.
He spoke out against companies' consolidation. Consolidation, he cautioned, leads to monoculture.
Monoculture, he said, leads to problems like the virus problems afflicting Windows.
At the end of Munroe's talk, AMD held a door-prize raffle where it offered three AMD CPUs and an Athlon 3700-based system worth over $3,000. Notably, one of the AMD CPUs was won by Intel's representative at the conference.
Originally posted to Linux.com 2004-07-22; reposted here 2019-11-24.
Posted at 16:32 on July 22, 2004.
Creative Commons highlights final day of OS conference
TORONTO - The third day of the KMDI Open Source conference at the University of Toronto produced no clashes between open source and proprietary advocates and started with the only split session of the three-day event. In one room, a discussion took place on open source in medicine. In the other, the discussion focused on open source and open content in education.
Neeru Paharia of Creative Commons led off the day with a flashy presentation. Creative Commons, she explained, is an alternative to copyright that allows creators to quickly assign various conditions to their works and register them online.
Creative Commons provides country-specific copyright licenses for creators, outlining base rules for whether anyone can copy the work in question, create derivative works from it, or use it for commercial purposes. Using it, a work, whether it be writing, a movie, poetry, music, or any other form of artistic or written creation, can be anything from fully protected under normal copyright rules to effectively in the public domain.
Creative Commons: Alternative protection
At the launch of the Creative Commons consortium, videos were sent in by John Perry Barlow of the Electronic Frontier Foundation and Jack Valenti of the Motion Picture Association of America endorsing its creation. Valenti acknowledged that not everyone wants to release their artwork under standard copyright rules, and that it was a good idea to allow people alternative ways to release artwork with different sets of restrictions.
Guylaine Beaudry spoke about Érudit, a project based jointly at the Université du Québec à Montréal, the Université de Montréal, and Université Laval, three French-language universities, which helps scientific research get published in nonprofit journals. In so doing, Érudit saves research and educational institutions thousands of dollars a year and keeps research and results in the hands of nonprofit groups rather than companies whose interests may not necessarily align with the public good.
Subscriptions to nonprofit journals, Beaudry said, cost about one-third as much as commercial journals. Érudit is a public service infrastructure within the university community whose goal is the promotion and dissemination of research outcomes. The organization's first objective is to get research journals online as soon as it can.
Most schools and students don't care about open source, she said, but Érudit does. Seventy percent of its process is open source, and 100 percent of it is open formats, though she warned that open source can be like proprietary software should it lack good documentation. For the most part, though, she said using open source leaves more money in Érudit's budget than proprietary software would. Even if open source costs more at the outset to get set up, it is still worth it.
She said that science journals need to be considered a part of the public good. Commercial scientific journals do not have the best interests of the scientific community and the public at heart, she said, and should be stopped and replaced with the open model of scientific journals her project provides.
After lunch, Ronald Baecker, chair of the conference and a professor at the University of Toronto, led off the final afternoon panel discussion before the closing keynote address, with Claude Gagné, policy advisor to the government of Canada's Department of Industry; Thomas Goetz of Wired magazine; Joseph Potvin of Public Works and Government Services Canada; and Mark Surman of the Commons Group.
Canada 'needs open source debate'
The five of them gave brief introductions, and the floor was opened to general discussion with the physical and online audience. Baecker started by identifying the issues we are facing as issues of power, community, control, and trust.
Gagné gave her perspective that Canadians need to be more aware of, and should better understand, open source. She agreed with Bob Young's assessment that Canada needs to have an open source-style debate on public policy over patents and copyrights, and she said that the Canadian government is debating whether code written within the government should be released as open source.
Goetz told the audience that he is explicitly not a lawyer, nor a programmer, nor a Canadian. Open source, he said, is not just a means to commodify existing proprietary software but a means to new ends. It allows and encourages progress.
Potvin told us the government of Canada is ours at least for the Canadians in the room and not just in elections, but all the time.
Surman said that the most important question for us to answer is what open source means and does for Canada. He also asked whether open source is a civil rights movement. Canada's socioeconomic background means that Canada needs more open source, and, he said, open source needs more Canada.
The first questioner from the audience asked the panelists where they thought open source technology would take us over the next 30 years. Would we see something of a Boeing 747's level of complexity coming from open source? Also, he asked, with intellectual property volume increasing every year, is copyrighted property sustainable?
Baecker stated that how far open source software can go is an empirical question. More research, he said, needs to be done on how to make open source projects more successful, referring to the large volume of projects on SourceForge that are abandoned or effectively dead.
Potvin picked up from there, saying that failed open source projects are no more failures than scientific experiments are. The projects that have been orphaned or abandoned are a stage toward something else, a new project.
Gagné said the Internet is a Pandora's box that has been opened and is impossible to stop, with so much activity, so many new phenomena, and no clear legal answers.
Goetz said that the SourceForge phenomenon is that of developers scratching their own itch, but that in many cases not enough people share an itch for projects to survive. Failed projects at SourceForge, he went on, are not unlike diseases abandoned by pharmaceutical companies which have found that solving a specific disease would not bring in enough revenue to make up for the R&D costs of solving it.
At this point, an attendee announced: "I am one of the failures on SourceForge. My program got knifed, not forked."
Potvin told the audience that when a project or program starts receiving complaints it is a time to rejoice, because it means people are using it.
Internet a Pandora's box?
A commenter suggested that the Internet can, in fact, be put back in Pandora's box. Media companies, he said, are buying up Internet infrastructure companies and taking control of the Internet. If Disney can block Michael Moore's latest movie, "Fahrenheit 9/11," from coming out, what's stopping them, a few years from now, from blocking some information from being distributed on the Internet at all, through infrastructure rather than legal means? Goetz responded by asking whether media ownership or government regulation would be better.
Steve Mann of the Department of Electrical and Computer Engineering at the University of Toronto, known for his work in wearable computers, gave the conference's closing keynote address.
With a dry wit to his speech, Mann, who invented the wearable computer as a Ph.D. project more than two decades ago, told us that he does not see what we call open source or free software in those terms, but rather as free source, with a required 'WARE' on the end.
Richard M. Stallman, he said, wrote that he believes everyone has a right to read. Mann said his belief is that everyone has a right to think.
As our society gets to the point where we are implanting computers in our bodies, it will become possible, he warned, to make having certain thoughts enforceably illegal.
As he spoke, the overhead projector previously used for presenters' PowerPoint presentations was serving as a display of what he could see through glasses with a built-in camera and mild image processing.
Instead of using presentation software of any sort, he had a small white notepad on which he wrote with a black marker. As he wrote, we could see it from his perspective on the overhead, eliminating the need for any form of slides for his writings.
The camera-and-glasses setup he was wearing, he told us, is capable of filtering out street advertising such as billboards and other streetside ads. He said there are a couple of Ph.D. students working full time on that problem.
Using wearable computers and cameras mounted on the person, he said, privacy can actually be improved.
The value of wearable cameras
Surveillance, he noted, translates from French as "oversight." Sousveillance, he said, means "undersight," and is the term he uses for the camera he wears. He believes that if everyone wore these cameras, privacy could be improved, because the need for surveillance cameras would no longer exist. With wearable cameras, any time there is more than one person present, it is assumed there is no privacy, and when you are alone, it is assumed there is privacy.
He pointed to camera attachments for modern cell phones as an example of why he believes it is a matter of when, not if, we get to this point as a society.
Mann's take on the GPL and on proprietary software is that there is a spectrum from copyleft to copyright.
At copycenter, he said, is the public domain.
Pictures, he said, should belong to the person whose likeness they are of, regardless of who took them. To that end, he has written the Humanistic Property License Agreement to provide limited rights to use a picture taken of someone under the license.
Mann's research is expensive, an audience member pointed out, and he came across as not particularly pro-corporate during his presentation, so where does he get the funding for his research? It comes from lucrative corporate speeches, consulting work, and donations, he said.
The conference as a whole functioned well. Its only fault was that the sessions routinely ran over their allotted time. The speakers and presenters, though I couldn't cover all of them in my summaries of the conference, were well selected and interesting. The conference was well worth attending for nearly anyone interested in any aspect of open source, free software, free source, libre software, or whatever else you'd like to call it.
The entire conference will be posted in about a month on the Internet as downloadable video files, along with all the PowerPoint presentations that were submitted and other information about the conference.
Originally posted to Linux.com 2004-05-12; reposted here 2019-11-23.
Posted at 02:29 on May 12, 2004.
Red Hat, Microsoft clash at open source conference
TORONTO - Day 2 of the KMDI Open Source conference started with Robert F. Young, co-founder of Red Hat, Inc., presenting a positive view of open source business models. At the end of a conference day lasting nearly 13 hours, Young returned to deliver the conference's keynote address. In between, Jason Matusow of Microsoft's Shared Source Initiative gave attendees a very different point of view about the value of open source in the enterprise. No real surprises there.
A company can have the world's best business plan, Young said, but still not be completely successful. It isn't simply the business plan that makes a business work; it's talking to customers and running the business that makes the plan produce results.
Young's basic business plan when he started Red Hat was to find a way to pay his rent after being laid off from a failing company.
The key: Always listen to customers
Companies that start with a perfect business plan, he said, don't necessarily succeed. The business plan can cause missed opportunities. The key is not the business plan or model the company intends to use but simply listening to what customers want. What your customers want is more important and more relevant than what the venture capital investors want.
His initial idea was to start a book store covering Unix and related systems. He went out and asked potential customers what it was they wanted, and their answer was categorically this: to find out what this free software thing they were hearing about was.
Red Hat created a business based around distributing Linux, and pretty soon it was making seven figures a year.
As the company grew and there was money to be invested, he started attending conferences of Unix vendors. At one such conference in 1995, Scott McNealy, CEO of Sun Microsystems, said in a keynote address that free software could have no P&L (profit and loss) because it had no P. That made Young happy, he said, because it meant free software was already getting attention at that early stage.
Their customers, Young said, told them not to go proprietary with their software. The great strength of Red Hat to many customers was that they could fix the software if it was broken, because the source is included, unlike in existing proprietary software models.
Red Hat does not sell a product, he told the audience. Red Hat sells control of a product. Because of that, people are willing to pay for Red Hat even though they know they can get the same product for free.
Like so many others in the computer industry, Young reached for a car analogy. Proprietary software, he said, is like an auto manufacturer selling a car with a locked hood that only the manufacturer can open again. Open source allows people to see what's under the hood, even though not everyone will ever actually fix anything.
Red Hat's loss leader
A member of the audience asked if Red Hat loses any money from the $2 Red Hat CD sales that take place around the world.
A lot, Young said gracefully. It isn't a problem for the company, because in the long term it's still good for the company, and it's good for Linux as a whole. It allows Red Hat to reach markets that it would otherwise have trouble reaching, he said, by allowing people to sell it in places where the company doesn't market. Those markets would otherwise be essentially cut off from Red Hat if the practice were not allowed.
Besides, as people who buy the $2 version of Red Hat move on, many may get to a point where they can afford the official boxed set, so really the company doesn't lose. It becomes something of a loss leader.
The presenter with the view most contrary to that of the majority of conference attendees was Microsoft's Matusow.
Matusow said that there is no one correct way to distribute software. In fact, he told us, the GPL is itself a proprietary license, as are all software licenses, because at their root is the assertion that someone owns the software and can give someone else permission to use it.
Linux, he said, is growing largely because of the endorsement of large corporations such as IBM and Novell, which have at heart their own corporate interests, which happen, for the moment, to coincide with Linux's. With about 60,000 software companies around the world, it is not just a matter of Linux versus Microsoft. There are a lot of open source and a lot of proprietary software companies.
He addressed Young's assertion that listening to customers is the most important part of a business plan.
Of course it is, he said; that's why 80 to 90 percent of the features in Microsoft Office are customer-requested features.
Frequent upgrades impossible?
The open source philosophy of "release early, release often" does not work in large corporate settings, he said. Getting a release cycle down to every two years or 18 months is hard enough; frequent upgrades are simply impossible, he added.
Matusow went on to make a point about Red Hat's corporate Linux licensing, saying that Red Hat has a per-CPU licensing scheme with an auditing clause in the contract, and that client companies cannot modify the (GPL'd) code, for risk mitigation reasons on Red Hat's part.
He commented that some Microsoft executives' early comments about open source were unfortunate, considering that Microsoft itself uses open source in some situations. BSD code, he said, is in Windows, and BSD is used at Microsoft's Hotmail Web mail service.
Microsoft's Shared Source initiative is so named because Microsoft wanted to avoid using the term "open" for anything, lest it open too large a can of worms. It did anyway, he noted. Eric S. Raymond, for one, finds the term and concept offensive.
Microsoft's Shared Source is a framework, not a license, he told us. Governments, corporations, educational institutions and others get a read-only license for Microsoft code. Showing the source code is a source of trust, he said. Less than 5 percent of users with access to the source will ever actually modify anything, he said.
Open source, he said, is a pejorative term. The Linux Standard Base, he told us, is necessary for independent software vendors to build on top of Linux.
In the question-and-answer section following his presentation, a member of the audience asked when Microsoft would allow users to save as Open Office XML files, an official standard document format. He avoided the question rather expertly.
Microsoft needed the SCO license
He was next asked about Microsoft's relationship to the SCO lawsuit. He said that Microsoft needed the license from SCO and that the amount paid for that license would not cover the legal costs of the company's suits.
He finished with the statement: "[Microsoft] will continue to compete based on product merit."
After a reception and break, Young took the floor again, this time to deliver an energetic hour-long keynote address.
Young told of his educational background. He received a General Arts honors degree from the University of Toronto, the site of the conference, around 30 years ago. He said he was never a very good student; he had wanted to become a lawyer, but on discovering that this would take several more years of education at institutions his grades couldn't get him into, he became a typewriter salesman instead.
Like many so-so students, he spent much of his homework time in the library reading about any topic not related to his homework. There, he learned about the thinker Adam Smith, whose view of the world he explained: fundamentally, that people who go out to make their own lives better can make the world a better place in the process more easily than could the world's most benevolent king.
Corporations, he told us, work well when they are small. They are generally responsive and innovative, but as they grow, they lose it. They become dysfunctional, stop innovating, and stop talking to their customers as they take over a larger and larger share of their market until they're a monopoly with no idea what's going on and increasingly high prices.
When the 'umbrella' effect kicks in
At that point, he said, the umbrella effect kicks in, where high prices and lack of attention allow smaller companies to enter the corporate lifecycle themselves.
After some discussion of Red Hat's corporate evolution and his own personal wealth, he spent a few minutes giving an overview of copyright law history, primarily in the U.S.
In 1950, he said, copyright lasted for 20 years after the creation of a work, the same as a patent. Now, 54 years later, the life of a copyright is 75 years after the death of the person or company that created the work.
Up until 1976, in order for a work to be copyrighted, it had to display the familiar "(c)" on the work. In 1976, that requirement was eliminated. All works were assumed to be copyrighted, and the public domain in the U.S. ended with the stroke of a pen. Since 1976, he told us, it is assumed that if a work exists, someone owns it, and therefore that if you use a copyrighted work you can be sued at pretty much any time in the future.
The GPL, and Richard M. Stallman, whom he called a true visionary, replaced the public domain. The license filled the void in copyright law created by the elimination of the public domain.
Patents, he said, stopped having to be specifically for inventions when the U.S. Patent and Trademark Office (USPTO) ran out of space to store all the inventions that had been delivered to it as part of the patenting process.
Got the money? Get a patent
Patents last for only 20 years. What can be patented has over the years devolved from a physical invention to an idea or concept. Anything, he said, can now be patented if you have the money.
He created the Center for the Public Domain with a $20 million budget and a mission to increase public awareness and debate on the issues surrounding copyright and copyright law.
Software patents are stupid, he said. When the USPTO surveyed software companies in the 1980s to find out if software patents should be implemented, the answer was a unanimous no. Lawyers and willing academics, he told the audience, managed to bring software patents into reality.
Intellectual property law, he said, is being written now not in actual, highly debated laws, but in hastily negotiated international treaties. Governments around the world are creating their IP law based on treaties signed among each other, which does not come up for public debate the way a regular law would.
The result is that many countries around the world are finding themselves with U.S. corporate-interest intellectual property laws.
Originally posted to Linux.com 2004-05-11; reposted here 2019-11-23.
Posted at 02:23 on May 11, 2004
OS conference endures PowerPoint requirement on Day 1
TORONTO -- The University of Toronto's Knowledge Media Design Institute Open Source conference opened Sunday with a three-hour session on Free and Open Source Software as a social movement, with Brian Behlendorf of the Apache Project leading it off.
Behlendorf, Rishab Aiyer Ghosh of the FLOSS project, and Eben Moglen, legal counsel for the Free Software Foundation, discussed their views and experiences on the social movement that we know as the F/OSS movement.
During the period, each speaker gave his remarks and then took questions, and at the end all three took questions together.
PowerPoint was required
Behlendorf led off with a comment that he is not used to PowerPoint, the presentation software of choice for the conference, which was running Windows XP. He apologized in advance if the PowerPoint requirement caused him to slip up, because he is used to the OpenOffice.org equivalent of the software.
From the early adoption of the Internet came two things: open standards through the Internet Engineering Task Force (the IETF), and the fundamental idea of a decentralized network.
In the model of the Internet we use, anyone or anything can connect to it, and if packets are sent, they are routed. If a particular node on the Internet is causing problems it is up to other nodes to ignore it.
The same basic concept can be applied to both democracy and the free software movement as a whole.
In a free software project, Behlendorf argued, anyone can contribute regardless of the letters or lack thereof after their name or their life experience. If what they say is productive, they'll be listened to, and if not, they'll be ignored.
The IETF, he said, has a basic philosophy for applying standards. It is not necessarily when everyone sitting around agrees that the standard makes sense, but when two independent implementations of a proposed standard are made and are capable of interoperating with each other that a standard is born.
Behlendorf's specific experience comes from his background with the Apache project. Apache, he told the audience, was founded on top of the NCSA Web server code which was licensed as what amounted to public domain with credit. Eight developers wanted to combine their patches for the NCSA Web server together; thus Apache's name (apache = A Patchy Web server). More to the point, part of the motivation for Apache's creation was the desire to ensure the IETF's HTTP standard was maintained and not altered or made irrelevant by a single company's control of both the dominant server and the dominant client at that time, Netscape.
No forking the code
Apache's developers wanted to keep the project free; no cost and the right to fork the code were critical to their aims.
As a new way of seeing it, Behlendorf suggested that students will start to see open source as a massive "Global Software University." He suggested that more companies will use "You can write open source code here" as a recruiting and retaining tool, and he believes nontechnological causes will use an increasing amount of open source, all resulting in expanded use and adoption.
Rishab took over when Behlendorf was done and started his presentation on the FLOSS project. FLOSS stands for Free/Libre and Open Source Software, where "Libre" addresses an ambiguity in the English word "free," which in many languages splits into two words: roughly, free (as in beer) and liberty (libre). The FLOSS project recently ran a number of surveys of developers in an effort to better understand the motivations behind the work they do.
Of the approximately 2,800 European, 1,500 American, and 650 Japanese developers the project surveyed, 78 percent expect to gain from the sharing of knowledge, while 32 percent say they want their contribution respected. Among dozens of other statistics was the interesting number that 45 percent of the developers identified themselves as "Free Software" developers while only 27 percent identified themselves as "open source" developers.
He went on to give a relative comparison of the cost of proprietary software in third world countries. In the U.S., he said, Windows XP and Microsoft Office together cost about $560, a small fraction of the country's per capita GDP, i.e., how much the average person makes in a year.
In Brazil, however, as a comparative example, the average annual income in the country is $2,915, meaning it would cost about 2.5 months of all a person's income to buy Windows XP and Microsoft Office.
To translate that back into U.S. dollars relative to the annual income it would amount to over $6,000 for Americans to buy that software.
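The arithmetic behind this comparison is simple to check. The software price and Brazilian income figures are the ones quoted above; the U.S. per-capita income of roughly $36,000 is an assumption for circa 2004, used only for illustration:

```python
software_cost = 560    # Windows XP + Microsoft Office, USD (figure quoted above)
brazil_income = 2915   # Brazil's average annual income, USD (figure quoted above)
us_income = 36000      # assumed U.S. per-capita annual income, circa 2004

# Months of a Brazilian's entire income needed to buy the software
months = software_cost / (brazil_income / 12)
print(round(months, 1))      # about 2.3 months, near the "2.5 months" quoted

# The same share of income, translated into U.S. terms
us_equivalent = software_cost / brazil_income * us_income
print(round(us_equivalent))  # about $6,900 -- "over $6,000," as stated
```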
FLOSS, he said, helps developers gain skills in many areas, including coding, teamwork, team management, and copyright law. The result is that FLOSS development helps its developers get employment. Many developers join FLOSS development in order to improve their skills, further improving their employability, and most stay on, continuing to build those skills.
Thirty-six percent of all companies that responded agreed somewhat to strongly with the statement that their employees could write open source software on company time. Sixteen percent of companies not directly in the technology field gave that response.
HTML inherently open source
The Internet, he said, moving to a new topic, took off when the people using it were not only programmers but artists and members of the general population who began sharing their content on it. To do so, they looked at existing Web pages, modified them, and posted their own. HTML became an incidental expression of widespread open source; it is an inherently open source language.
Open source allows countries to inexpensively train their citizens in technology and technology development, ultimately allowing them to escape the model of vendor-provided black box software.
As an example, he cited the Spanish province of Extremadura, the poorest region in Spain, lacking any decent transport infrastructure and being a largely agricultural region.
With the European Union's consolidation and the liberalization of its telecommunications laws, the possibility of losing universal access to such basic services as telephones became real, and it became necessary to bring the region into the modern world in short order.
A community Internet access centre with 2Mbps connectivity was set up in every village. Libraries existed or were set up in every village, and a goal of a computer for every two students was set and achieved with the help of 70,000 computers running LinEx, Spain's in-house distribution of Linux.
Digital literacy training was offered to everyone from housewives to the retired and unemployed. About 78,000 people have been trained to date.
Asked if offshoring was a threat to open source, he responded that it is not; offshoring is done to save money, and if you're not paying people, you're not paying them just as much here as in India.
Moglen spoke next. He is a charismatic speaker with a lot to say, but as another speaker later pointed out, some of what he says is rhetoric.
Speaking on behalf of the conspiracy
He started off by announcing that he was there to speak "on behalf of the conspiracy." The history of freedom, he said, is a history of conflict between those who want freedom and those who benefit by depriving them of their freedom.
In order for freedom to prevail, we have to, as a society, get past being willing to accept the deprivation of freedom for profit.
Richard M. Stallman started his movement, Eben said, in 1982. Twenty-two years later, his movement is still strong and getting stronger. By contrast, 22 years after the French revolution, Napoleon was attacking France's neighbors to impose its newfound freedoms on the other countries of Europe, much in the way, he commented, that the U.S. is bringing freedom to the people of Iraq.
With software being free, and hardware being very cheap, Eben identified the next big battle for freedom as being bandwidth. The right, he said, to trade data is the next battleground, and it will involve opening wireless frequencies up for use in ISPindependent networks and so forth.
Appliances that depend on Free Software are meant to include the code. The responsibilities of appliance manufacturers will be more clearly laid out in the next version of the General Public License, version 3, upon which work is starting.
GPL version 2 was released in 1991 by Stallman rather unilaterally. Eben described that as appropriate for the time, but said future releases of the GPL, such as the in-progress version 3, will not be done the same way. An open process is to be established to write and adopt GPL 3.
He closed by describing the Free Software movement as being irrelevant to business, but as being a fight for civil liberties as they relate to software.
The afternoon session saw a series of speakers, in discussions of varying lengths, address the legal and political issues of open source.
Empowering the mission through knowledge
Graham Todd led off the afternoon with a discussion of the Canadian Government's International Development Research Centre, a Crown Corporation whose mission is not aid specifically, but research in Third World countries. The mission is empowerment through knowledge. He told us that his goal in attending the conference was not to speak about open source but to learn from the people there what he can take back to IDRC Canada and use in their research.
David McGowan of the University of Minnesota Law School went next, declaring himself, in words borrowed from an essay by Eben Moglen, a copyright and IP droid, neutral in the battle between open and proprietary software. His interest is in software that works for what he wants to do, and whoever provides it is fine with him.
To call free software freedom, he said, must mean that licensing is slavery. Free software, he said, is full of rhetoric. He cited the tone of Moglen's speech as much as its content in support of this assertion.
SCO, he said, is making ridiculous claims from a legal perspective. From the GPL's supposed unconstitutionality to its being preempted by U.S. laws, the facts just don't carry it. It is rhetoric on SCO's part, designed to be no more than just that: rhetoric. And it is no more rhetorical than the rhetoric from the open source and Free Software side in the ongoing battle between them and SCO.
What is the GPL, he asked pointedly. The GPL is, he answered, unilateral permission to use code. It is not a contract.
The explicit lack of a warranty, among other things, provides a condition on the permission, and from there the GPL's interpretation in various legal jurisdictions simply becomes fuzzy.
Is the GPL terminable? Not really, but if someone with rights pulls out of software development that is under the GPL, that can cause problems.
Is the GPL a license or a trademark? He pointed out that while the GPL is written and presented as a license, it is used more as a trade or certification mark. As an example, he said that 72 percent of all projects on SourceForge.net are released under the GPL. Did every one of those projects think through the decision to use the GPL? Or did some of them use it because it marks a project as free software, not for the specific terms of the license? Is it, he asked, a license, or is it a manifesto?
Whose GPL is it, anyway?
Whose GPL is it? To all appearances, he pointed out, the GPL is a creature of and a part of the Free Software Foundation. Is it everyone's or FSF's? The answer is resoundingly the FSF's.
How many GPLs are there? He pointed out that the GPL applies to anyone modifying or distributing the covered software, and that the GPL must be passed on. To users who merely use the software, the GPL is essentially irrelevant. As such, there are two classes of GPL licensees.
Food for thought, anyway.
Barry Sookman then took over. He's from the Technology, Communications, and Intellectual Property Group at McCarthy Tétrault. His take on the GPL is that it doesn't behave the same in every country.
In the U.S., copyright is right in the constitution, whereas in, for example, Canadian law, the whole philosophy of copyright is different. The only purpose of copyright in Canada is to benefit all forms of authors and creators of art. According to a Privy Council (essentially the Supreme Court of the Commonwealth) ruling in 1923, copyright law stems from the Bible's 8th commandment: Thou Shalt Not Steal.
In deciding what to protect under copyright law, the basic principle used is that anything worth copying is worth protecting.
Without going into too many details, his argument is that with the different histories and interpretations and case laws in various countries, the legal interpretation of the GPL can vary wildly and it will apply differently depending on the country.
Broader participation needed
Drazen Pantic of OpenNet, Location One, and the NYU Center for War, Peace & News Media compared the nature of proprietary software to broadcast-only media. Neither, he said, is inherently trustworthy, yet both depend on being trusted. For progress to be made and freedoms to be gained, there needs to be broader participation: the user of software and the reader or viewer of news must become a part of the development or broadcast.
With opportunities to comment at the end, Moglen pointed out that a concern about what happens to GPLed software once it enters the public domain was irrelevant for all practical purposes, as under current U.S. law copyrighted works become part of the public domain only 70 years after the death of the individual or company that created them.
Ultimately, the first day of the conference worked out well. All the speakers spoke in one large room and the sessions were long enough that there did not need to be a feeling of urgency for one to end so people could get to the next one. I look forward to Day 2.
Originally posted to Linux.com 2004-05-10; reposted here 2019-11-23.
Posted at 02:10 on May 10, 2004
Real World Linux 2004, Day 3: The conclusion
TORONTO -- The third and final day of the Real World Linux 2004 conference saw a number of valuable technical sessions and two interesting keynote speeches, one from IBM Canada president Ed Kilroy and the other from Linux International director Jon 'maddog' Hall.
IBM on 'Open for E-Business'
Kilroy, IBM Canada's president, gave a keynote address entitled "Open for E-Business on Demand: An Executive Perspective." Kilroy said that Linux has become mainstream and that IBM supports Linux because it brings real value to both IBM and its customers.
Kilroy thanked customers who have remained with the company for a while, adding jokingly: "NonIBM customers, we'll get you on the way out."
After years of cutbacks and corporate trimming, companies are back to considering growth. Companies, Kilroy said, have become very good at cutting costs over the past few years, and they are looking to start moving forward again.
Eighty percent of chief executive officers are going into 2004 with growth on their minds, he said. Eighty percent of CEOs are concerned about agility in response times to their customers' needs, and 60 percent of companies need corporation-wide transformation in the next two years, Kilroy said.
With this changed business environment, many companies are looking to revamp themselves and their systems to meet their customers' needs. Kilroy said it is time to push IBM's on-demand business model, which integrally involves Linux.
On-demand, he explained, means a lot of things. A company that is on-demand is a company that can react quickly to rapidly changing circumstances.
Some of Kilroy's key points:
In order to be a company capable of business on demand, the company needs to have all its processes integrated from one end to the other.
The company needs to operate as a unit.
An on-demand business is one that can rapidly react to internal and external threats.
Kilroy gave an example. In 2003, IBM planned a meeting in Toronto where 1,200 IBM employees would gather to discuss various projects. When the conference was only a few days away, the SARS crisis hit the city, putting scores of people in hospital and killing a number of victims. The external threat to IBM's internal conference forced the company to decide whether the conference was worth the SARS risk, or if it was best to cancel it.
The result, and the part that falls under IBM's definition of on-demand, was that the risk was not taken, nor was the conference cancelled. IBM instead converted the entire conference into a two-day series of Webcasts, eliminating the need for a good deal of human contact.
Companies cannot count on improving indefinitely. Eventually the way they are going will cease to be efficient or competitive. Companies must concentrate not on getting better, but, as he put it, getting different. Those businesses need to be willing and able to evolve, and their processes and infrastructure need to evolve together, in concert, to maintain the best value.
Because Linux is open, Kilroy said, it is a cornerstone of IBM's vision of on-demand business. It enables companies to choose platforms appropriate to the jobs they intend to do. It is cost effective, and it is secure. Linux, he pointed out, is used in everything from the smallest embedded devices to laptops, to game consoles, and on up to the largest mainframes and supercomputing clusters.
IBM, Kilroy said, contributed $1 billion in value to Linux in the year 2000 and now has 7,000 employees around the world whose jobs are dedicated to it. He told us that Linux is used in missioncritical applications across the entire company.
Kilroy told us that IBM runs, excluding research and development (R&D) servers, more than 2,100 Linux servers across the company. Among the uses he listed for the Linux servers were:
- An intranet server for the company with 100,000 users, running on a zSeries system
- Security assessments
- Email and antivirus scanning
- Hosting services for clients (what he termed e-hosting) and network management
- IBM's Standard Software Installer (ISSI), a process that builds images for the company's 320,000 employees
- Microelectronics 300-millimetre wafer manufacturing
IBM's power technology processor
The 300mm wafer is for IBM's "power technology" processor, he said, and it is used in all sorts of applications, including Microsoft's Xbox game console. The assembly line is fully automated, start to finish, with no human intervention. It is controlled entirely by Linux computers and has been running for 25 months without any failures or outages.
He spent the next few minutes briefly outlining some Canadian companies whose Linux migrations had been done using IBM hardware. Among them, he said, Nova Scotia's Chronicle Herald print newspaper adopted Linux in 1997. The whole shop is running, he told us, on IBM blade servers. The newspaper benefits from lower maintenance and licensing costs for their systems.
Mark's Work Wearhouse, a clothing store, needed to update its aging inventory and point-of-sale systems. The old system, he explained, ran independently at each store, and at the end of the day a mainframe at headquarters would collect all the information from all the stores and figure out how much inventory needed to be sent to each store based on the sales.
Instead of continuing down that path, the company set up Linux and Web-based point-of-sale terminals that maintained real-time information on the head office's computers. This resulted in a 30 percent reduction in the total cost of system ownership for the company.
One of the more interesting points Kilroy made during his keynote was near the end, when he told of the U.S. Open Tennis tournament's Web site. It ran on Intel servers and peaked and troughed heavily over the playing season, depending on weather, who was playing, and so forth. To have a server capable of always handling the highest load that could occur, he said, would be uneconomical, because most of the time the server is not used at its highest level.
What they did, instead, was share a computer cluster with a cancer research group. As traffic to their site increased, more processor time was allocated to the Web site hosting and less to the cancer research. As the traffic tapered off, the computers returned to the task of researching cancer.
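The sharing arrangement described above amounts to a simple policy: give the Web site a slice of the cluster proportional to its current traffic, and let the research workload soak up the rest. A minimal sketch of that idea, where the function name, cluster size, and traffic figures are all invented for illustration:

```python
def split_cluster(total_cpus, traffic, peak_traffic):
    """Divide a shared cluster between Web hosting and research.

    Hosting gets a share proportional to current traffic (capped at
    the whole cluster); the research workload gets whatever remains.
    """
    share = min(traffic / peak_traffic, 1.0)
    web_cpus = round(total_cpus * share)
    return web_cpus, total_cpus - web_cpus

# A quiet day: almost the whole cluster crunches research data
print(split_cluster(64, traffic=500, peak_traffic=10_000))    # (3, 61)

# A finals weekend: hosting claims nearly everything
print(split_cluster(64, traffic=9_500, peak_traffic=10_000))  # (61, 3)
```

The real systems involved would migrate workloads rather than just recompute a split, but the proportional-allocation idea is the same.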
He emphasized that a switch to Linux can cut a company's costs and allow it to reinvest the savings into its business transformation and IT infrastructure. In essence, he said, an upgrade to Linux can pay for itself.
The moral of his presentation, according to his final slide, was the comment: "Don't fear the penguin." He also urged those present to take advantage of the information IBM offers free on its website about using Linux.
In the brief Q&A session that followed, a member of the audience asked if IBM's SmartSuite would be released under the GPL. A fellow IBM employee said that IBM would like to do so but that components of SmartSuite are licensed but not owned by IBM, so making it public may take some time.
Jon 'maddog' Hall speaks
Jon 'maddog' Hall opened his speech by announcing that his lawyers were reminding him to tell people that Linux is a registered trademark of Linus Torvalds. Companies or individuals that want to use the Linux name should consult linuxmark.org for information about using the name and registering it before simply using it.
He then went on to quote Sir Isaac Newton: "If I have seen further, it is by standing on the shoulders of giants." Linux, he said, should fall under the term "cooperativism," rather than the sometimes-used "communism." He also explained that his interpretation of free software is that it is free to be helped, not necessarily free to be taken.
Hall rehashed a good deal of the things he told his smaller audience at Wednesday's "visionary" debate panel in more detail.
From 1943 to 1980, he said, all software was open source. Companies did not ship compiled software; they shipped source code. If a company needed some software written, it contracted the work out, and if the work was not done properly, on time, or with adequate documentation, the developer would not be paid.
In 1969, Hall said, he was a university student. In that year, a software package called DECUS was written and donated to libraries. The idea behind the move was that if the person writing it had to write it anyway, why not let others use it? Others could use it for the cost of copying the paper-tape code.
Hall said that 1969 turned out to be an important year in software development for two other reasons: a) two scientists at Bell labs developed the first version of Unix, and b) Linus Torvalds was born.
From 1977 to 1980, Hall said, the price of hardware dropped. In 1980, shrinkwrapped software started to hit the market and has remained the dominant form of software distribution.
Hall offers a history lesson
In 1984, Richard Stallman, upset at software he could not modify, launched the GNU project, an effort to build an operating system that would have source code available and free. Stallman then went on to write emacs. Hall said some people think he should have stopped there instead of trying to write the GNU kernel, because emacs has enormous functionality and is sometimes jokingly referred to as an operating system.
RMS and the GNU project went on to write the gcc C compiler, libraries, and command interpreters to go on top of their upcoming operating system.
As the concept of available source code continued to take off through the '80s and '90s, sendmail, bind, postgres and other such major projects began to appear, using the model of free software that was beyond the scope of the GNU project.
In 1991, Finnish university student Linus Torvalds started a new operating system project to imitate Unix, because it was simply too expensive to acquire Unix for personal use. Torvalds, as anyone who knows their Linux history can tell you, started the project as nothing more than something to do for fun.
Unix is really a Linuxlike system
Hall said that Unix is a trademarked brand of X/Open; it is a certification mark. Because of that, it is incorrect to say that Linux is a Unix-like operating system, because it isn't one unless X/Open says so.
Technically, he concluded, Unix must be a Linuxlike operating system, except that it is expensive and closed.
For people or companies who do not want to write code under the GNU General Public License, he told the crowd, there are other options besides being closed source. Licenses such as the Artistic and BSD licenses allow a different set of restrictions from the GPL.
He thanked the Free Software Foundation for all their hard work up to now on getting free software adopted.
Hall reminisced about days gone by when he could call up companies from which he had bought software for support and, rather than getting a dismissive menu system as now often happens, be able to quickly speak to someone like the company's president or CTO to get the support he needed.
Getting a reply to support requests, he said, is a different matter in the closed source world. By using closed source software, companies' business methods change. If the source is provided, a company can fix it and get on with its work rapidly, but with closed source, the company providing the software has to be depended upon to fix the problem. With open source software, the user or user company becomes a part of the solution.
Part of the reason for today's success of open source, he explained, is that hardware is very inexpensive.
Today you can buy a 3GHz processor, a 120GB hard drive, a ton of RAM, and a video card for just a few hundred dollars; this would have made military planners jealous just a few years ago. The price of software, he noted, has not dropped along with the price of hardware; the result is that Linux essentially fills a vacuum left by the desire for lower software prices relative to the low hardware prices.
He also said that proprietary software poses a problem to military and government organizations. Should a government or military organization trust a foreign company to provide unauditable software for use on their sensitive machines?
Why Linux is beneficial to international companies
Another factor for governments is the issue of economic trade deficits. For a country to pay large amounts of money to a company in another country makes little economic sense, if there is a way to pay people in their own country and provide employment (and tax revenue) back to themselves. Linux provides such a means. Because there is no dependence upon a specific company being in the home country, a government can hire its own citizens to deploy and support Linux.
Another problem with the closed source software model, Hall said, is the issue of native language support.
Countries and regions whose people do not necessarily speak one of the 50 languages Microsoft supports, for example, have no recourse to add their own native tongue to the software. With Linux, a company or a government can hire people to translate the software into their native language, or people can do so voluntarily without that effort causing any issues.
In the mid-1990s, Drs. Thomas Sterling and Donald Becker developed the Beowulf cluster. The concept behind it was simply that lots of cheap, off-the-shelf computers working together could replace the dying breed of large supercomputers. The result was a supercomputer for about 1/40th the cost, Hall said. Grid computing has taken that original concept to much greater heights here in the 21st century.
With a computer's instability, Hall said, if each person loses, say, $5 a day in productivity from crashes and bugs, that works out to about $2.5 billion per day in lost productivity in the entire computer-using world. If using more stable software gains each person $1 of unlost productivity, that is $500 million a day saved.
He noted that the number is based on an installed base of 500 million computers across 6.3 billion people.
He joked that 5.8 billion people have yet to choose their operating system.
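Hall's figures check out against the installed base he cited; a quick sanity check of the arithmetic:

```python
# Hall's figures: 500 million computers in use, 6.3 billion people.
installed_base = 500_000_000
world_population = 6_300_000_000

# $5/day lost per machine to crashes and bugs
daily_loss = 5 * installed_base        # $2.5 billion per day
# $1/day of productivity regained with more stable software
daily_savings = 1 * installed_base     # $500 million per day
# people who have "yet to choose their operating system"
undecided = world_population - installed_base  # 5.8 billion

print(daily_loss, daily_savings, undecided)
```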
Hall noted that SourceForge has 850,000 registered developers and about 85,000 projects in development.
Even if only 10% of those developers are actively writing software, that is more people than Microsoft's 50,000 employees, he said. Of those employees, he said, 22,000 are marketing and sales people, and he estimates that no more than 2,000 to 3,000 are actually hands-on software developers. As such, he said, the Linux community is much bigger than Microsoft.
Eighty companies on the show floor
About 80 companies spanned approximately 60 booths on the trade show floor. There was the usual mix of Linux companies, geek shirt vendors, bookstores, and little-known companies who would pounce on passing media badges in the faint hope that one of them might be so impressed as to get word out about their products.
One company that did manage to catch my attention in this manner was Net Integration Technologies. They sell what they call an autonomic Linux-based server operating system named Nitix. The person I spoke to explained that it was derived from a "Debian kernel" (a bit of a red flag), insisting that yes, it was from a Debian kernel, but that it was now mostly original code.
Photo: The Real World Linux 2004 trade show floor.
I asked if Nitix or its components were released under the GPL and was told that, no, the (advertised as Linux-based) operating system was developed in house in spite of its Linux roots. Perhaps it deserves a little more investigation to see if the salesman knew what he was talking about, or if the company really is selling an in-house-developed, Linux-based Debian kernel with only original code.
Update: Net Integration Technologies spokesperson Sandra Lemaitre wrote to tell me that I had been misinformed, and included the following:
In our Nitix server operating system we use a standard Linux kernel (not Debian, as you were told), and about 80% of the software that we use (maybe even more than that) is released under the GPL or other open source license. All changes that we have made to this code are freely available on our open source web site: http://open.nit.ca/.
The autonomic bits of our Nitix OS are proprietary to us, but the rest of it (Apache, Samba, etc.) belong to the open source community, and we contribute to the community a fair bit by submitting patches to be used by the rest of the world when we fix something in one of the open source components.
Our company takes great pride in being part of the Open Source community; two of our company's founders, Avery Pennarun and Dave Coombs, have been involved with the Linux operating system for over ten years and co-developed the popular WvDial application. Actually, many of our developers have produced and contributed to numerous Open Source and commercial software projects.
Real World Linux 2004 Conference in review
The first day was the most useful and informative of the three days to people who came to the conference looking to learn something. The three-hour small-group sessions provided an opportunity for presenters to go into enough detail to be informative to those attending.
The conference's attendance was around 2,500 people over the course of the three days. In its first year, 2003, Real World Linux mustered approximately 1,700 people in the midst of Toronto's SARS (Severe Acute Respiratory Syndrome) outbreak.
Originally posted to Linux.com 2004-04-16; reposted here 2019-11-23.
Posted at 19:22 on April 16, 2004.
Real World Linux 2004, Day 2: Keynotes
TORONTO - Real World Linux 2004 Conference and Expo is under way this week at the Metro Convention Center in Canada's largest city, and NewsForge is there. Day 2 of the conference saw a lot more people and a lot more happening. (Go here for a report on Day 1.) Wednesday's first keynote featured Novell's Nat Friedman, who talked for well over an hour.
Friedman's address began with a series of screen shots providing all longtime Linux users present with flashbacks to days gone by. The first was of X with its internal default background and the tiny window manager (twm), an analog clock, a calculator, and a calendar. This he called "Linux Desktop: 1992."
Next up, "Linux Desktop: 1995," showed tremendous progress with the inclusion of an xterm running the elm mail client, a digital clock, and the fvwm window manager, one with actual functionality and multiple-virtual-desktop support. It also had the Netscape Web browser.
"Linux Desktop: 1997" showed new progress with the addition of an early version of the GNOME desktop environment and a spreadsheet written, as Nat pointed out, by Miguel de Icaza. By 1997, he noted, even xterm was configurable.
The slide of "Today's Linux Desktop" showed OpenOffice.org and Evolution. OpenOffice.org, he mentioned, is a descendant of StarOffice, which Sun bought a few years ago from Marco Boerries and released to the public, complete with 6.5 million lines of German-commented code.
The slides did a very good job of making the point that Linux has made significant progress over the last several years.
Linux's Zeitgeist is good
For the next part of his presentation, he discussed Google's Zeitgeist charts, the company's weekly release of statistics on search terms, client operating systems, and other search-related data. In the Google Zeitgeist, Linux is listed as the host of 1% of the Web browsers that connect to Google. Curious for more information, he contacted Google to find out if the 1% was correct, or rounded. He was told that it is just shy of 1.5%, and pretty soon it should jump straight to 2%.
By and large, Friedman told the packed room, Linux adoption is coming from the Unix market. Linux systems are more cost-effective than their Unix counterparts, and it only makes business sense to switch to Linux from the old Unices. Linux's adoption rate (the percentage increase in new users), he added, is faster than Apple's.
He went on to list a number of Linux adoption success stories, including Largo, Fla.'s 900 users; Sao Paulo, Brazil's 10,000; Munich, Germany's 14,183 Linux desktops across seven departments at the city government level; Spain's 300,000; and Thailand's estimated 1 million Linux desktop deployments.
In Spain, he explained, the school districts in two states aim to have one Linux computer available for every two students. It has reached the point there that students ask to use the "GNU/Linux" rather than the computer, and parents ask to use Mozilla rather than the Web. The 300,000 computers running Linux instead of a proprietary operating system also means the Spanish government can keep the money it is spending inside the country by contracting Spanish citizens to do the work. Friedman pointed the audience to linex.org for more information about the Spanish deployment, and Thailinux.org for the Thai deployment of a million Linux computers.
Friedman told of a bank in Brazil called Banrisul that has adapted Linux for use on all its ATMs and is so proud of the fact, all their ATMs display an image of Tux in the bottom left corner.
The city of Largo has converted its fleet of computers for all aspects of civil administration to Linux, using thin clients (computers that boot off the network and run all their software off a central server rather than functioning as independent computers), right down to the displays in its emergency vehicles, which use CDMA cellular wireless Internet connectivity.
The city of Largo, he told us, buys its thin clients off eBay for as little as it can pay for them and keeps them stockpiled in case one ceases to function, that being a far cheaper solution than using conventional computers.
Friedman's heritage is from the Ximian project, founders of the GNOME desktop environment. Ximian was recently bought by Novell, and Novell is a major sponsor of Real World Linux 2004. Novell's logo is plastered everywhere, right down to our ID badges at the conference. It was only natural that he'd talk at least a little about his new employer.
Novell, he said, is adopting Linux on an accelerated schedule. The goal is to have it deployed in 50% of the company on a full-time basis (i.e., no dual-booting other operating systems) by October 31, 2004, roughly 3,000 desktop systems. At present the company is up to about 1,000 installed Linux desktop systems.
The major barriers to corporate adoption of Open Source Software, Friedman said, are:
- usability
- application availability
- interoperability
- management and administration
Problems more 'perceived than real'
His basic take on the problems is that they are more perceived than real. For usability, he argued that users learn patterns in whatever they do, and so are used to them. The key to solving the problem is pursuing intuitive and robust applications. Applications should make logical sense to people using them, and should not crash.
Apple, he said, created the "Apple Human Interface Guidelines" from which the idea to create the GNOME Human Interface Guidelines was born. The guidelines outline, he said, down to the pixel how things should look and work to allow the best user experience.
One thing he demonstrated as an ease-of-use and intuition problem was the "Apply" button in many configuration programs. He demonstrated a simple change in gconf which changed the size of icons without pressing "Apply." What real-life analogy does the Apply button have? he asked.
In real life, he said, you don't pour water out of the pitcher and then press "Apply" to have it show up in the glass.
Duplication of effort, he said, is an inherent fact of open source development. It is not, however, a problem, unless and until two projects reach the point of specialization where they can no longer be sanely reintegrated. KDE and GNOME, he said, are not at that point yet, but do risk getting there.
Code duplication is useful to the community, he argued, because it means every permutation of a problem's solution will be tried. The best one can then continue to live.
After Friedman's presentation, I found myself at Dr. Jasper Kamperman's presentation on a qualitative comparison of open source versus proprietary code.
Jasper works for Reasoning, Inc., which analyses source code for otherwise virtually undetectable, or at least unfindable, problems, such as uninitialized variables, memory leaks, and resource leaks, depending on the language.
Due to an NDA he could not give us the name of the proprietary software he used for the comparison, so it is anybody's guess what it was. Without knowing it, it is difficult to judge how useful his comparison really is.
For the sake of his presentation, he compared Linux kernel 2.4.19’s TCP/IP stack with an unnamed commercial TCP/IP stack. Apache, Tomcat, and MySQL were also compared to commercial implementations of the same types of programs.
His fundamental conclusion was that at the development stage, open and closed source software are about comparable in terms of the number of errors per thousand lines of code, but open source software tends to have fewer errors per thousand lines of code by the final release. He presumes that this is due to peer review.
Executives recognizing value of Linux
In the afternoon, Anne Lotz-Turner of CATAAlliance moderated a panel consisting of Ross Button, VP of emerging technologies at CGI; Joseph Dal Molin, president of E-cology Corporation; and Jon "maddog" Hall, executive director of Linux International.
Lotz-Turner opened the panel by discussing a survey in which 60% of those queried were corporate executives. Of that 60%, she told the audience, 13% did not include open source software as part of their long-term strategy. Fifty-five percent acknowledged that their company uses open source software for something, anywhere from as mundane as a Web server to as extensive as the entire company. Thirty percent said they have made a conscious decision to use open source software.
Survey respondents told the surveyors that their key factors in deciding what software to use are, in order:
Open source, she noted, is strong in all those categories.
The problems, the respondents said, were:
- Intellectual property concerns
- Time-consuming to research open source options
After her fast-paced introduction, she asked Hall to start the discussion by answering the question, what are the biggest obstacles to open source adoption?
His answer was to the point: Inertia.
Companies, he said, aren't using open source because the applications they want to use for their specific specialized purpose are not supported under Linux. The companies that make the applications don't want to make the applications available under Linux or other open source operating systems because no companies are using them. It's a vicious circle.
Linux, Hall said, first gained fame from supercomputer clusters running it and later from embedded systems. Linux has conquered most markets, but the desktop battle is closer to the final frontier than the first fight.
Before 1980, he told the audience and fellow panelists, companies hired other companies or individuals to write software for their purpose. If the software wasn't adequately functional or documented, the contracted company simply wasn't paid. In so doing, companies had tight control of their software.
Since 1980, companies have taken to using prepackaged software programs, and have forfeited that control.
Photo: Ross Button, Joseph Dal Molin, and Jon "maddog" Hall are introduced by moderator Anne LotzTurner.
He described the pre-1980 system as the first wave in software development, the packaged sets as the second wave, and open source software as the third wave, the one that is now starting to wash over.
Dal Molin was asked by the moderator why he thought Canada has been slow on the uptake of open source software.
Molin's answer was as simple as Hall's: inertia.
He focused more on mindset, though, and described open source not as a product, but as a paradigm.
Open source, he said, has been around a long time under other names. Peerreviewed medicine and science have been around a lot longer than open source as we know it, and it is a tried and true way to develop medical technology.
Companies and organizations need an environment to explore open source that is free of the propaganda and FUD (fear, uncertainty, and doubt) so prolific on the Internet, he said. Companies need to experiment with open source and they will find that they benefit, Molin said.
Ripple effect of Y2K
When Button's turn came up, he discussed the ripple effect of the Y2K upgrade craze. Companies the world over, he pointed out, spent a lot of money and time upgrading their computer systems and looking over ancient but still usable code. That was five years ago, and many companies are now at a point where they are seeking to upgrade.
Sixty to seventy percent of corporate IT budgets, he said, go to maintaining existing infrastructure. The balance can go to researching and purchasing new equipment.
The moderator returned to Hall and asked him what should be done to overcome the inertia he had talked about earlier.
His response was that companies need not be concerned at first about how hard a transition is and how much retraining will cost, but should look at what they already have. Many companies will find that in some offices employees are already running Linux, and some may be running it at home. Many people at larger companies may predate the move to Windows and still remember old Unix mainframes. In short, companies need to figure out what knowledge their employees already have.
Instead of converting existing projects over to Linux, he suggested, companies should have new projects use it. People do not have to be retrained if they're trained into Linux in the first place.
A member of the audience and the panel discussed the fact that if companies share out proprietary source code, they will get more in return. The costs of maintaining that code internally can exceed the cost of publicizing the proprietary code and having the community at large assist in its maintenance.
Another member of the audience asked for the panelists' opinion on the ongoing fight between SCO and the Linux community.
Hall responded that adoption was really not being affected. With large corporate backers like HP and IBM ignoring the threat, smaller companies are following along on the assumption that if the threat had any merit, those large companies would be acting differently.
Calgary tells of its experience
Immediately following the panel discussion, another keynote presentation started upstairs. This one was presented by D.J. Coppersmith of HewlettPackard Co. and Dan Ryan of the City of Calgary, Alberta.
Coppersmith's section of the presentation was little more than a buzzword-infested pitch for HP products and services supported by a professionally made PowerPoint presentation. His purpose was to introduce how wonderful HP was because it helped the city of Calgary convert to Linux.
Ryan's presentation was a little more interesting. The city of Calgary, approaching 1 million in population, is in the process of converting to Linux, and generally is very Internet-aware. Ryan said that about 85% of Calgarians are on the Internet, 62% of them using it on a daily basis. The city has also grown very quickly over the last number of years, but the IT budget, not being of political importance to the city council, has not grown with it.
Calgary started a series of pilot projects to convert its old Unix servers to Linux on x86 hardware. The pilots went so well that the city went ahead with plans to switch to Linux. Ryan said that it was a done deal, that there was no going back in the foreseeable future.
Linux has allowed the city of Calgary to reduce the number of servers it needs; lower hardware, licensing, and maintenance costs; and improve performance on the city's database systems, on the order of 200% to 600% improvement over the old systems.
Photo: D.J. Coppersmith and Dan Ryan take questions following their keynote address.
Ryan said that processes that used to take 60 hours to do on their 8-CPU UNIX servers could be completed in only 13.5 hours on their two-processor Intel systems running Linux.
The only hiccup they encountered was that they needed to upgrade from Oracle 8i to Oracle 9i, but they took advantage of the opportunity to downgrade their license to a per-CPU standard license instead of a more complex enterprise-level license.
One of the key factors he credited with the success of the move was the involvement of the city employees with the migration. By utilizing their input, city administrators found that morale was high and more could be accomplished.
The switch from Unix to Linux is already saving the city of Calgary $500,000 per year from its IT budget, Ryan said. The result was that old computers, not employees, got laid off.
Originally posted to Linux.com 2004-04-15; reposted here 2019-11-23.
Posted at 01:50 on April 15, 2004.
Real World Linux 2004, Day 1: A real world experience
Real World Linux 2004 Conference and Expo is taking place this year at the Metro Toronto Convention Center, North building, next to the Canadian National Tower in the middle of Canada's largest city.
On Tuesday morning I left home on the 6:10 a.m. city bus to the train station, arriving in Toronto about half an hour before the start of the day's tutorial sessions.
After orienting myself in an otherwise unfamiliar environment, I set about registering my presence. I'd been in touch with RWL04's media publicist, Stephanie Cole, and had been assured that I was registered prior to arrival, but I still needed to pick up my badge.
Symbiotic relationship between press and conferences
I should note that, unlike most conference attendees, reporters and others working for media organisations do not normally pay to attend conferences and are seldom constrained in where they can go or what they can do. This is not out of the kindness of organizers' hearts, but because media organisations and conferences have a symbiotic relationship: media organisations get free and limitless access in return for the coverage they bring to the conference.
At about 8:40 I began waiting in line and chatted with a few fellow attendees. After a few minutes, it was my turn at the desk. I gave the person behind the counter my name and indicated that I should be under their "media" registration list. They were not expecting any media prior to the show and didn't have any media passes ready. A few minutes passed and just a couple of minutes after 9 a.m. I was on my way, badge around my neck. I was making progress.
I walked down the hall seeking the session I intended to attend. I was greeted at the door by a small man whose job can best be described as a bouncer. He scanned a barcode on my badge and allowed me into the room. Inside, another conference official, an usher, pointed me to a stack of handouts for the presentation. I was now sitting at the back of a small session room where "Tuning and Customizing a Linux System" author Dan Morrill was giving a three-hour tutorial entitled "Linux Customization: Three Case Studies," already in progress due to the delay at the registration desk.
Dan's presentation was a case study of his home computer network. He set up an NFS file server, a firewall, and an arcade-style game console using three Linux computers running the Fedora distribution of Linux. When I arrived, he was discussing the basic structure of an operating system and how UNIX and UNIX-based operating systems live off a single root process known as "init"; he also touched on SysV- versus BSD-style initialization scripts.
After an explanation of the basics of running anything under Linux, he set about explaining what exactly it was that he had done that warranted a presentation.
I must say I was a little apprehensive that anyone could make a three-hour tutorial on how to set up an NFS server, firewall, and game server, but Dan did a pretty good job of it.
He made the point repeatedly that on a new installation of Linux it is necessary to remove unnecessary packages from your system. If you are running a computer that is attached to the Internet, remove services you are not using lest they become the source of a security compromise that could otherwise have easily been prevented.
Also, he noted, having extra packages kicking around is a waste of the computer's resources.
Dan Morrill explains how to set up firewall scripts under Linux.
NFS: Simpler than first thought
His first project was the creation of an NFS server, which he demonstrated by mounting a local partition remotely on his laptop using an overhead projector. I must confess that I've never actually used NFS before, and it is a lot simpler than I thought.
After typing only a small handful of commands, he could change directories to a "remote" directory that was really local. He explained that NFS is a trusting protocol and assumes that whatever the client tells it is probably true. Thus, if the client tells the server that its user is "cdlu" (which is actually represented numerically to the computer), the server will believe it and give that client computer access to all of cdlu's files. As such, he advised that NFS be used with care, and that hosts that are not supposed to have access be kept out, which can be accomplished using firewall scripts.
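The "small handful of commands" really is small. As a sketch (hypothetical host address and path, not Morrill's actual setup; export options vary by distribution), the server side comes down to one line in /etc/exports:

```shell
# /etc/exports -- one line per exported directory (hypothetical example).
# Export /home read-write to a single trusted client; root_squash maps
# requests from the remote root user to an unprivileged account.
/home  192.168.1.10(rw,sync,root_squash)

# After editing, re-read the exports table on the server,
# then mount the share from the client:
#   server$ exportfs -ra
#   client$ mount -t nfs server:/home /mnt/home
```

Note that root_squash protects against a remote root user, but nothing in the protocol itself stops a client from claiming any ordinary user ID, which is exactly the trust problem described above.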
His second project was not all that interesting, but to those not familiar with the subject it was probably the most useful and informative. It won't be rehashed here, but he explained how to set up a firewall under Linux kernels 2.4 and 2.6, with proper forwarding rules within his home network so that not everything had to run on his gateway.
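For readers who have never seen one, a 2.4/2.6-era home gateway firewall follows a common iptables pattern. This is a minimal sketch with hypothetical interface names, not Morrill's actual script:

```shell
#!/bin/sh
# Minimal stateful firewall/NAT sketch for a home Linux gateway
# (hypothetical interfaces: eth1 = LAN, eth0 = Internet).
IPT=/sbin/iptables
LAN=eth1
WAN=eth0

$IPT -P INPUT DROP                 # default-deny traffic to the gateway
$IPT -P FORWARD DROP               # default-deny traffic through it
$IPT -A INPUT -i lo -j ACCEPT      # allow loopback
$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Let LAN machines reach the Internet, and allow replies back in.
$IPT -A FORWARD -i $LAN -o $WAN -j ACCEPT
$IPT -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Rewrite outbound traffic to use the gateway's address, and turn
# on packet forwarding in the kernel.
$IPT -t nat -A POSTROUTING -o $WAN -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
```

The forwarding rules are what let services live on machines behind the gateway rather than forcing everything to run on the gateway itself.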
His third project was the most interesting but the least useful of the three. He discussed in detail how he took a computer, installed Linux on it, and ran it with only a TV-out card and an arcade control pad of some description. He built a box around it, put a TV in it on an angle, and thus made himself a true arcade-game console. Remember them? They were the things used in the movie "War Games" before David Lightman wardialed his way into the U.S. military computer and started a simulated thermonuclear war.
After a brief questionandanswer period, his presentation was complete and everyone was given a copy of his book. He signed for those who requested it.
After lunch I found my way over to Marcel Gagné's presentation on migrating to Linux. His enthusiasm and casual approach dominated the room and his presentation was both interesting and informative.
As Marcel prepared to start, he held up a book he wrote called "Moving to Linux: Kiss the Blue Screen of Death Goodbye!" Ironically, the overhead projector attached to his laptop refused to cooperate and gave a blue screen with the words "No signal. Help?" The projector was changed, and he started by announcing that, no, Marcel doesn't like PowerPoint.
He was casual. It was not a oneway discussion but a seminar in which he became the teacher and the audience his students.
Wake up and smell the penguin
The presentation was oriented to the IT guy who needs to smack the company he works for and get them to wake up and smell the penguin, and to that end he gave many good pieces of advice. He was also very clear that Linux is not the be-all and end-all of modern computing but is a very good and cost-effective tool for the majority of companies.
Those interested in introducing their companies to Linux should do so in stages, he argued.
Marcel Gagné explains how to get business users to switch to Linux.
OpenOffice.org will run in Windows and is a good launching point. A Windows user's first experience of Linux, he said, should be in KDE, notwithstanding the civil wars such comments can elicit from the GNOME community.
KDE is not necessarily his first choice for his own use, he said, but as the desktop environment that is the most mature and full-featured, and the most similar in functionality and feel to Windows, it is the place to start.
CrossOver Office from CodeWeavers, VMware, and Win4Lin are all options for those Windows applications that users still need, and can ease the transition, Marcel said.
In an effort to show us the merits of the various programs, he demonstrated CrossOver Office happily running Microsoft Office programs without requiring a Windows installation on the computer. It appeared to work well for more programs than just Microsoft Office. He was not, however, able to show us Win4Lin, because he did not have a copy available. His demonstration of VMware failed when Windows simply refused to load, apparently having something to do with the fact that his laptop was not on his home network.
PDFs save costly Windows upgrade
At a business in the area, he told us, a company was refusing to move to Linux out of habit. Eventually, he showed the president of the company OpenOffice.org's ability to save a file directly as a PDF, and pretty soon the whole company was using Linux instead of making a costly upgrade to a new version of Windows. Such is the power of OpenOffice.org.
Among the other selling points of Linux to push in a business environment, he told us, is the native ability of many desktop environments to run multiple virtual desktops. Instead of having people spend unending amounts of time minimizing and chasing programs around the screen, separate desktops for separate tasks are a benefit that many users have not considered. Gaim, the X-based instant messenger client, and KDE's Kdict are two other important pieces of software for users new to the experience of Linux. Kdict allows users to highlight words and click a button on the bottom menu to get definitions of the word from a variety of Web dictionaries.
Just in case your business types aren't yet satisfied, the full-featured browser Mozilla and its stripped-down sibling, Firebird, with their ability to do anything from blocking popup ads to limiting cookies as needed, should provide that extra push, he said.
But whatever you do, give your boss something concrete to work with. Something like a Knoppix CD, he said, is the perfect place to start. It provides a completely working Linux system with absolutely no risk to their existing systems.
Recycling old computers as dumb servers
Marcel spent a large part of his presentation dealing with a topic with which many people aren't familiar.
He explained the concept of recycling otherwise obsolete computers as thin clients, essentially dumb terminals that can run Linux. He has gone into this topic in depth before at Linux Journal.
The Linux Terminal Server Project and ROM-o-matic.net are the places to start, he said. Setting up an old hard-driveless 486 or a purpose-built thin client can be more cost-effective and environmentally friendly than upgrading to a new computer. For many businesses, thin clients are the best way to go, as the actual uses of the computer are not so process-intensive as to require new systems.
As a sidebar, Marcel mentioned that a company called linuxant.com sells a software package that allows drivers written for Windows to work under Linux, for hardware such as winmodems or 802.11g wireless cards that lack native Linux drivers.
In short, Marcel's presentation gave all present a strong selection of arguments and tools with which to persuade the companies they work for to see the light and move to Linux, though he did emphasize one last time at the end: Linux is not always perfect for everyone in every situation.
After Marcel's presentation, I meandered back over to Union Station to catch the train home, there being no more events for the day.
Originally posted to Linux.com 2004-04-14; reposted here 2019-11-23.
Posted at 01:41 on April 14, 2004.
(RSS) Website generating code and content © 2001-2020 David Graham <email@example.com>, unless otherwise noted. All rights reserved. Comments are © their respective authors.