Tuesday, December 31, 2013
Recently I was helping a friend who had written a song. We wanted to produce some sheet music, so I thought it would be worth checking out the open source music authoring software that is available.
Many years ago I looked at software in this space; some of it promised much but, in the end, none of it delivered on any front. At that time, shareware Windows software was all that could be found.
This search was a lot more successful. It did not take me long to find MuseScore, a cross-platform, open source sheet music editor with playback. The Windows version worked out of the box and we soon had the basics mastered: chords, notes, repeats, lyrics and so on.
My next instinct was to get it working on Linux, specifically CentOS-6. After installing a few requirements from rpmforge, I had the code compiled and running (with just a few minor bugs). Once I knew that it worked, I wanted to make an RPM. Unfortunately, musescore (or mscore as it is called under Linux) uses an unusual build procedure based on CMake. Adding that into an RPM seemed a little daunting, so I found the closest thing I could on the internet to use as a starting point.
openSUSE has an RPM but it would not build on CentOS. After a few simple tweaks I had the package building, but it was not being packaged correctly: the %makeinstall macro was trying to install files into the live filesystem rather than the package buildroot. After much internet searching and testing, I found the answer, which is so simple I don't know why no one had written it down before.
Just as you would for a normal Makefile, set the DESTDIR environment variable. My %install section now has:
%install
export DESTDIR=%{buildroot}
cd build && %makeinstall
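For context, the reason this works is that CMake-generated Makefiles honour DESTDIR just like autotools ones do. A generic sketch of an out-of-tree build with a staged install (not MuseScore's exact targets; the buildroot path is illustrative):
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr ..
make
# stage the install into a temporary root instead of the live filesystem
make install DESTDIR=/tmp/buildroot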
And with a few more minor tweaks I had the RPM finished.
Now I just have a few issues to sort out with playback on Linux. MuseScore does not like the default ALSA settings, or ALSA devices (like my USB headset) disappearing and reappearing when I suspend and resume. PortAudio might fix that, but it does not seem to work on my home machine. A topic for another blog post, no doubt.
Sunday, December 22, 2013
Lazarus has come to life
Most of my Windows programming is still done in Delphi 5. It was released in 1999, which makes it positively ancient. (Even scarier is that some programs started life in Delphi 2.)
Not long after Delphi 5, Borland lost the plot (or perhaps they already had when they changed their name to Inprise). Although Delphi 7 was available, by the time I looked into upgrading the .NET experiment was under way and it seemed like the end of the road for Delphi. I had hoped for a modern, cross-platform version to appear, but that never really happened. Eventually the whole circus was transferred to Embarcadero (whatever that is) and I still don't understand what works with what.
While all that was going on, I kept an eye on another project, Free Pascal. Free Pascal always showed such promise, but the IDE was reminiscent of programming C++ under DOS.
Finally, though, it seems I might have found what I have been looking for: Lazarus, the Delphi-styled IDE for Free Pascal. It has matured enough for a 1.0 release, with 1.2 on the way, and offers native Unicode support and cross-platform builds (Linux, Windows, OS X).
Although there are a few sharp corners (don't import a unit called strings!), I am planning to convert my projects over to Lazarus and hopefully bring them back from the dead too. My initial testing has been positive.
With any luck my programs will soon be playing nicely with Windows 7, running on Linux and being recompiled without any licence fees. It has taken a long time, but I think it has been worth it.
Now if I could just press for Android support too...
Thursday, November 21, 2013
CentOS 6 on Apple hardware
I don't know how long Apple has been using Intel hardware. I have never owned or used any, and I know pretty much nothing about OS X.
Inevitably, a user asked me if they could have Linux on their Mac. They have both a MacBook Pro and an iMac. I said I would look into it, and my findings are listed below.
There are two main areas which cause trouble when installing onto Mac hardware: drivers and booting. Drivers are relatively easy, as most of the hardware is supported by recent OS versions. Booting is much more complex: UEFI, GPT, 64-bit support, Boot Camp and rEFIt all seem intentionally shrouded in mystery and made unnecessarily complicated.
The aim is to leave a working OS X install for admin tasks but boot into Linux for everyday use. The starting point is a working OS X install with GPT partitioning.
MacBook Pro 7,1 (13-inch, Mid 2010)
Status: Not Supported
The Nvidia MCP89 is not supported. This means no hard disks to install onto.
There are a number of setpci commands floating around the internet which promise to activate the chipset as an AHCI controller. None of the commands I tried could get this to work.
Some other observations:
CentOS 6 UEFI boot did not work. The GRUB menu appeared but it would not boot any entries; Escape returned me to the menu, and entering quit at the command line took me back to the OS X boot environment (I was using rEFIt).
I wanted to use our corporate PXE boot environment. I burned an iPXE boot CD which worked once: it successfully booted into pxelinux and then the CentOS-6 installer. After that initial success, I could never get it to work again; it would lock up after prompting to press ^B.
iMac 27"
Status: Supported (using bootcamp and rEFIt)
Installation was a bit involved and seems to vary depending on the current version of OS X and bootcamp. The procedure was:
- Boot into OSX
- Install rEFIt (not rEFInd) (and reboot two times to activate)
- Run bootcamp (because we will not use EFI booting)
- Create a fat32 filesystem to keep bootcamp happy
- Boot off a CentOS-6 CD
- Remove the fat32 partition
- Create a /boot partition (/dev/sda3)
- Create a LVM partition (/dev/sda4)
- Select /dev/sda as the boot loader location
- Complete the install and reboot
- Boot into the CD rescue mode
- open a shell
- chroot /mnt/sysimage
- grub-install /dev/sda
- parted /dev/sda
- toggle 3 legacy boot
- quit
- exit
- exit
- reboot
- Open rEFIt partition tool
- Synchronise partition tables
- Boot linux from rEFIt
- Install ATI binary video drivers
- Done
iMac 7,1 (20-inch, Mid 2007)
Status: Supported (using rEFIt)
Installation was done using our PXE network install. Substituting a CentOS install CD for iPXE would presumably also work.
- Boot into OSX
- Install rEFIt (and reboot two times to activate)
- Boot into OSX
- Run the Disk Utility
- 'Partition' the disk and change the size of 'Macintosh HD' to make room for linux (I used 50Gig)
- Download iPXE and burn to CD
- Reboot into rEFIt
- Boot off the iPXE CD (by selecting the CD icon from rEFIt)
- PXE boot into the CentOS-6 installer
- Select 'Use free space'
- Install boot loader to /dev/sda
- Complete the install and reboot
- Reboot into CentOS rescue mode
- open a shell and run these commands:
- chroot /mnt/sysimage
- grub-install /dev/sda (probably not needed if grub was installed to /dev/sda during the install)
- parted /dev/sda
- toggle 3 legacy boot
- quit
- yum update (there seems to be a bug in the kernel/initrd shipped with 6.4 where a fresh install can't find the hard disk; fixed with an update)
- exit
- exit
- reboot
- Open rEFIt partition tool
- Synchronise partition tables
- Boot linux from rEFIt
- ATI binary video drivers are not compatible
- Done
Conclusion
It is possible to run CentOS on your Mac. The procedure is complex and unpredictable. I will update this page if I identify any ways to simplify the process.
Friday, November 1, 2013
XBMC on CentOS 6
After years of putting it off I have finally taken the plunge into XBMC.
My plan is to hook it up as a front end for the Digital ORB PVR.
RpmFusion have packages for XBMC version 11 for CentOS 6. This is a good start as many of the audio & video packages from RpmForge (which I am sure most CentOS 6 users have) are too old for XBMC.
Unfortunately there is no package for XBMC version 12, which apparently has much better PVR support. The bug database entry suggested that it would be relatively easy to fix... and it was: https://bugzilla.rpmfusion.org/show_bug.cgi?id=2699
So now I have a relatively stable base to start my integration. Let's hope I get it working before XBMC version 13 comes out and changes everything again.
If you want to run XBMC yourself, you will need my xbmc and taglib packages from http://www.chrysocome.net/downloads and any dependencies from rpmfusion and epel.
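For what it's worth, a hypothetical install sequence (package file names are illustrative and assume the EPEL and RPM Fusion repositories are already enabled):
# yum localinstall pulls the remaining dependencies in from the enabled repositories
yum localinstall taglib-*.rpm xbmc-*.rpm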
You can also get the Digital ORB from my downloads directory. It does come with instructions but has not been widely tested yet. Stay tuned.
Monday, October 21, 2013
Large filesystem on CentOS-6
I had the pleasure of testing out a new server which came with 10 x 4TB drives. Configured on an HP Smart Array P420 in RAID 50, that gave me a 32TB volume to use.
I installed CentOS-6 but the installer would not allow me to use the full amount of space. It turns out the maximum ext4 filesystem size on CentOS-6 is 16TB. This is the limit imposed by 32-bit block addressing with 4k blocks: 2^32 * 4k = 2^32 * 2^12 = 2^44 = 2^40 * 2^4 = 16T.
That did not make for a very exciting test: only half the disk was available in a single filesystem. Sure, I could create two logical volumes and two filesystems, but that seemed a bit like a DOS solution.
Some research turned up the option of using ext4 48-bit block addressing. This allows a filesystem of up to 1EB in size and leaves room for future 64-bit addressing with an even larger limit.
The catch, of course, is that 48-bit addressing is not supported by the tools which come with CentOS-6.
Fedora 20 (rawhide) does come with the required updates to e2fsprogs (e2fsprogs-1.42.8-3) which enable 48-bit addressing (future-proofed, somewhat confusingly, under the name 64-bit addressing). I set out to rebuild the Fedora 20 RPM for CentOS-6. Amazingly the compile was clean, though I did have to disable some tests, which is a tad alarming. All in all it did not take long to produce a set of replacement RPMs for e2fsprogs.
After installing the new tools I was able to create a filesystem larger than 16TB. I started at 17TB to give myself room to play. It seems that the CentOS-6 kernel already has support for 48-bit addressing: I ran a number of workloads and could not find any problems. Still, I don't know what bugs may be lurking in there.
dumpe2fs shows the new filesystem feature '64bit'.
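For reference, a minimal sketch of creating and checking such a filesystem with the rebuilt tools (the logical volume path is illustrative):
# create an ext4 filesystem with the 64bit feature enabled
mkfs.ext4 -O 64bit /dev/vg_data/lv_big
# confirm that the feature list now includes '64bit'
dumpe2fs -h /dev/vg_data/lv_big | grep -i 'features'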
I also attempted an offline resize of the filesystem to the maximum size of my disk. This just worked as expected. Online resize is not available until a much later kernel version; I did not attempt it because there are known bugs.
The last limit I wanted to test was the file size limit. Even on my new filesystem, the individual file size limit is 16TB. It is not every day that I get to make a file that big, so I did:
dd if=/dev/zero of=big bs=1M count=$(expr 16 \* 1024 \* 1024)
At ~500MB/s it still took 9 hours to complete.
And it turns out that the maximum file size is actually 17592186040320 bytes and not 17592186044416. That is 4k (one block) short of 16TB. (You can actually check this on any machine with the command truncate big --size 17592186040321 )
That also raised an issue which used to be a problem on ext3: how long does it take to delete a large file? Well, this is the largest possible file, and it took 13 minutes.
In conclusion, the 16TB filesystem limit is easily raised on CentOS-6, but it comes at the expense of using untested tools and kernel features. Although I did not find any problems in my testing, this could pose a substantial risk if you have over 16TB of data which you do not want to lose.
If anyone is interested in my RPMs, you can get them from http://www.chrysocome.net/download
Monday, September 23, 2013
Windows 8?
My first brush with WinPE turned out to be rather successful (http://blog.chrysocome.net/2013/02/pxe-boot-winpe.html), but recently our Windows team upgraded the SCCM server. Now when I attempt to install Windows 7 in KVM on CentOS-6, I get a Windows 8 logo and then the dreaded error 0x0000005D.
The cause for this is, as always, long and complex. The new SCCM release uses WinPE version 4, which is based on Windows 8. Windows 8 requires a minimum set of CPU features to run, and if you don't meet the minimum you get a well-worded error message (well, at least it is easier to search for than a BSOD report).
I can't do much about SCCM, WinPE or Windows 8, so the next question is: why does my KVM virtual machine not meet the Windows 8 requirements?
You guessed it: bugs! It seems (more or less) that the 'sep' CPU feature was forgotten by libvirt and there is no fix coming soon.
What is needed, then, is a well-implemented workaround. KVM does support the required flag (-cpu ...,+sep) but libvirt has no way to pass the flag through to kvm. I already have a wrapper around kvm (http://blog.chrysocome.net/2013/05/can-kvm-guest-found-out-who-its-host-is.html) so it seemed logical to extend that script to add the missing flag.
Below is my solution, which adds the +sep flag to the existing CPU configuration as well as setting the serial number, as per the original script. Installation is the same as shown in my other blog post: edit the guest and set the <emulator> path to /usr/local/libexec/qemu-kvm (either using virsh edit or your favourite XML editor).
/usr/local/libexec/qemu-kvm
#!/bin/bash
# This is a wrapper around qemu which will supply
# DMI information
# and correct a bug with the CPU type required for winpe4 (Windows 8)
max=${#@}
index=0

# Find the position of the argument that follows -cpu (the CPU model string)
for i in $(seq 1 $max) ; do
    p=${@:$i:1}
    if [ "$p" = "-cpu" ] ; then
        (( index = $i + 1 ))
        break
    fi
done

# Append +sep to the CPU model and rebuild the argument list around it
if [ $index -gt 0 ] ; then
    cpu=${@:$index:1}
    cpu="$cpu,+sep"
    (( ibefore = $index - 1 ))
    (( iafter = $index + 1 ))
    set -- "${@:1:$ibefore}" $cpu "${@:$iafter}"
fi

# A real guest start passes -name first; in that case also inject the host serial via SMBIOS
if [ "$1" = "-name" ] ; then
    SERIAL=$(/usr/bin/hal-get-property --udi /org/freedesktop/Hal/devices/computer --key system.hardware.serial)
    exec /usr/libexec/qemu-kvm "$@" -smbios type=1,serial="KVM-$SERIAL"
else
    exec /usr/libexec/qemu-kvm "$@"
fi
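To put the wrapper into service, each guest needs to be pointed at it as described above (the guest name here is illustrative):
# edit the guest and change the emulator element to the wrapper:
#   <emulator>/usr/local/libexec/qemu-kvm</emulator>
virsh edit win7-build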
Monday, September 9, 2013
Using xorg without a mouse
I always like it when I learn something new, particularly when I read something on the web which turns out to work.
I wanted to find a way to move the mouse cursor without a mouse. I can actually do this with my lirc based remote, but I have a new wireless keyboard for my Linux 'TV' and I wanted a way to do it from there too.
A Google search turned up an interesting page which suggested that X.org has this functionality built in, and to my surprise it worked. It even had instructions for keyboards without a numeric keypad (like mine). Apparently "MouseKeys" and "PlotMode" have always been features of X.org and XFree86.
I must be in PlotMode at the moment, which is quite slow. I will have to practice enabling accelerated mode; <Fn><Alt><Shift><Num Lock> should be the key combination I need.
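As an aside, the MouseKeys toggle is normally exposed through an XKB option; a sketch assuming a standard xkeyboard-config setup (the option name may vary between distributions):
# enable the 'Shift + NumLock toggles PointerKeys (MouseKeys)' option for the current session
setxkbmap -option keypad:pointerkeys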
Thursday, August 1, 2013
puppet augeas and sudo
I wanted to configure some sudo rules using puppet.
The default sudo config has a directory called /etc/sudoers.d, which makes dropping in the actual entries rather easy:
file { "/etc/sudoers.d/example" :
ensure => present,
owner => 'root',
group => 'root',
mode => 0440,
content => template('example/sudo.erb'),
}
but alas, the default RHEL6 sudo has requiretty set which prevented my sudo rules from working correctly.
Naturally I wanted to use augeas to remove that flag but it turned into a nightmare trifecta of puppet + augeas + sudo. Three tools with so much potential and a great lack of real world documentation.
I remember battling with this before and giving up. This time I was determined to succeed. I revisited the only information on the internet, but I still could not get it to work. After looking at the code for the sudo lens I was pretty sure I had the correct version, and eventually I was pointed in the right direction: instead of removing requiretty, I needed to negate it. After some more mucking around I came up with a working incantation:
augeas { "turn off sudo requiretty":
changes => [
'set /files/etc/sudoers/Defaults[*]/requiretty/negate ""',
],
}
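The result can be inspected from the shell with augtool (assuming the augeas command line tools are installed); the negated requiretty shows up under the Defaults entry with a negate child node:
augtool print /files/etc/sudoers | grep -i requiretty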
I hope that will be of use to someone.
Thursday, July 18, 2013
Sparse files on Windows
Once again I am drawn away from Linux to solve a Windows problem. The source of the problem is Hyper-V which (as always) has a cryptic error message about 'cannot open attachment' and 'incorrect file version'.
The source of the error was tracked down to the file being flagged as Sparse.
What is a sparse file?
Under UNIX/Linux, a sparse file is a file where not all of the storage for the file has been allocated. Handling of sparse files is normally transparent but some tools like file copy and backup programs can handle sparse files more efficiently if they know where the sparse bits are. Getting that information can be tricky.
In contrast, under Windows, a sparse file is a file which has the sparse flag set. Presumably the flag is set because not all of the storage for the file has been allocated (much like under Linux). Interestingly, even if all the storage is allocated, the sparse flag may still be set. (It seems the flag indicates the potential to be sparse rather than actually being sparse; there is an API to find out which parts are actually sparse.)
The problem started when I happened to download a Hyper-V virtual machine using BitTorrent. While the files are being created, not all of the content exists, so the file is indeed sparse. Once all the content has been supplied, the file is (to my mind anyway) no longer sparse. However, under Windows it seems, once a sparse file, always a sparse file.
Microsoft provide a tool to check and set the sparse flag:
fsutil sparse queryflag <filename>
fsutil sparse setflag <filename>
Note 1: Have they not heard of get and set
Note 2: You can't use a wildcard for <filename>
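As an example, typed at a cmd prompt, a quick (if noisy) way to report the flag for every file under the current directory using only the built-in tool:
for /r %f in (*) do @fsutil sparse queryflag "%f"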
The amazing thing to note here is that there is no clearflag option. This might lead you to believe that the flag cannot be cleared; in fact it can. For users in a pickle, there is a program called Far Manager which can (among other things) clear the flag. Far Manager is open source, and a quick peek at the code shows that it uses a standard IOCTL named FSCTL_SET_SPARSE to do this.
So with that knowledge, it is actually quite easy to make a file not be sparse any more. In fact, I wrote a program called unsparse.
Not only does the tool have the ability to clear the sparse flag, it can recursively process a directory and unsparse all the sparse files found, making it perfect to fix up a Hyper-V download.
Look for the program soon on my chrysocome website http://www.chrysocome.net/unsparse
Friday, June 21, 2013
@reboot jobs will be run at computer's startup.
"@reboot jobs will be run at computer's startup."
What on earth does that mean?
These days Red Hat uses cronie as the system cron daemon. Described as 'based on the original cron', I believe it is a fork of vixie-cron, which was used up to EL5.
For some time, both of these cron daemons have had an @reboot syntax which allows you to run scripts at (more or less) boot time (when the cron daemon is started). This allows users to start long running processes without the sysadmin having to write an initscript.
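For example, a crontab entry like the following (the script path is hypothetical) starts a user's daemon whenever crond itself starts at boot:
# added via 'crontab -e'
@reboot $HOME/bin/mydaemon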
It also happens that from time to time, the cron daemon crashes. This is not ideal because cron is the very tool which can be used to periodically confirm that a daemon has not crashed. For now I have added a check to puppet to ensure that the cron daemon is running.
When the cron daemon is started, it logs some messages, one of which is the cryptic:
@reboot jobs will be run at computer's startup.
message. I understand that it is trying to tell me something, but it falls short of conveying it. The internet did not have much to say on the topic either, so I had to resort to the source code.
The message comes from within the run_reboot_jobs function. The function first checks for the existence of a (so-called) lock file. The file is
/var/run/cron.reboot
If this file is present, the message is printed out and none of the @reboot jobs are run. If the file is absent, it is created and then the @reboot jobs are queued up to be run.
Perhaps the message should read:
Lock file /var/run/cron.reboot present. @reboot jobs have already been run. skipping.
So that is the mystery almost solved. The remaining detail is that during boot, the rc.sysinit script removes a number of stale lock files, including the contents of /var/run. This ensures that at boot time, the cron daemon runs the @reboot jobs.
If you wanted to re-run the @reboot jobs without rebooting your server, you can easily trick it with:
rm /var/run/cron.reboot
service crond restart
Perhaps that could also be added to the init script so you run
service crond restart-boot
Wednesday, May 29, 2013
chkconfig priorities
I recently had a problem on a CentOS-6 machine where my dhcp server did not start at boot because it was serving a virtual network interface which did not yet exist.
The best solution to this problem would be for the dhcp server to start up and wait for the network interface to come up. Many other network tools manage this successfully, so I don't know why dhcpd should be different. That problem is, however, too big for me to fix on my server, so I need a simpler approach.
To solve this, I would like to adjust my dhcp server to start after the virtual network interface service. CentOS-6 uses the (old) Red Hat style init scripts (with some LSB configuration too). Red Hat likes to proclaim that it uses Upstart now, but all Upstart does is call the /etc/rc.d/rc script, just as init used to.
So the problem still comes down to the System V style scripts, which have magic comments defining when to start and stop each service.
These are the relevant parts of the header for the dhcp server (/etc/rc.d/init.d/dhcpd):
### BEGIN INIT INFO
# Provides: dhcpd
# Default-Start:
# Default-Stop:
# Should-Start: portreserve
# Required-Start: $network
# Required-Stop:
# Short-Description: Start and stop the DHCP server
# Description: dhcpd provides the Dynamic Host Configuration Protocol (DHCP)
# server.
### END INIT INFO
#
# The fields below are left around for legacy tools (will remove later).
#
# chkconfig: - 65 35
Despite the comment about the fields being legacy, when the service is installed using chkconfig the priorities used are indeed Start 65, Kill 35. We can confirm this with a simple ls:
ls /etc/rc.d/rc?.d/*dhcpd
The virtual network is also controlled by an init script with a more modest header (/etc/rc.d/init.d/vand):
# chkconfig: 2345 95 05
# description: Virtual Area Network Deamon
Not surprisingly, this will Start at 95 and Kill at 05.
The first idea I had was to modify the dhcpd init script directly. Unfortunately this script is not marked as a config file in the RPM, so when a new version comes out my changes would be lost. I need something better than that.
It is not immediately obvious, but chkconfig does have the ability to alter the priorities. This is called an override (not to be confused with the command line parameter --override).
I was unable to find any documentation on overrides, but I did work out that this minimal config file was all that was needed:
/etc/chkconfig.d/dhcpd:
### BEGIN INIT INFO
# Required-Start: vand
### END INIT INFO
When chkconfig reads the headers from the dhcpd init script, it will also read this file (because it has the same basename) and override the values from the init script.
All that is needed is to apply the settings with:
chkconfig dhcpd on
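The effect can then be confirmed without rebooting:
# check the runlevels and the new symlink priorities
chkconfig --list dhcpd
ls /etc/rc.d/rc?.d/*dhcpd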
Some information about LSB init scripts can be found here http://refspecs.linuxbase.org/LSB_3.1.1/LSB-Core-generic/LSB-Core-generic/initscrcomconv.html
The final job is to make puppet aware of my changes. Perhaps in my next post.
Friday, May 24, 2013
What happened to java -client?
On our shared server, users are limited to 25 processes. A login script further reduces the soft limit by another 5 processes. This safety margin allows a user to log in and kill something which is using up all their processes. If they want, they can raise their soft limit back up to the hard limit.
This works well for most things but java insists on using lots of processes. Confusingly, it prints this message:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# An error report file with more information is saved as:
# ./hs_err_pid27297.log
The log file is not of much use. It repeats the claim about 'insufficient memory' which is not true and none of the listed solutions will help.
The problem is that the default garbage collector (GC) scales its thread count with the number of CPU cores. Our old server had 8 cores but the new server has 24, which easily pushes users over the 20 thread soft limit.
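One way to see how many parallel GC threads the JVM would use on a given host (a standard HotSpot diagnostic flag, not specific to this setup):
# the version banner goes to stderr; the flag dump goes to stdout
java -XX:+PrintFlagsFinal -version 2>/dev/null | grep ParallelGCThreads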
I don't know if the log file lists the number of Java threads; if it does, I can't work out how to read it. If it could be found, it could be compared with the number of available processes:
expr $(ulimit -u) - $(ps x | wc --lines)
In the past I used to solve this problem by adding this to the login script:
alias java="java -client"
but that does not work any more. I don't know when it was removed but it leaves me with the headache of finding a new workaround.
After some research I found this page https://wiki.csiro.au/pages/viewpage.action?pageId=545034311 which suggested using
-XX:ParallelGCThreads=1 -XX:+UseParallelGC
or even just
-XX:+UseSerialGC
This seems to solve the immediate problem but still is not all that useful. To specify the flag to run java you need this syntax:
java -XX:+UseSerialGC
but for javac you have to specify like this:
javac -J-XX:+UseSerialGC
and I don't expect anyone to have to remember that. One simple solution is to alias both of those commands, but there are many other tools which are part of the JDK which would also need aliases. These other tools may (and do) use their own unique command line parameters, which I would rather not have to learn.
Luckily there is also an environment variable you can set like this:
export _JAVA_OPTIONS="-XX:ParallelGCThreads=1 -XX:+UseParallelGC"
however, that makes all the java programs print out this annoying message:
Picked up _JAVA_OPTIONS: -XX:ParallelGCThreads=1 -XX:+UseParallelGC
And the word on the internet is that there is no way to suppress that message.
My solution is to use an alias for the most common tools, java and javac, while keeping the environment variable for all other programs. At first I tried to use this in my alias:
_JAVA_OPTIONS= /usr/bin/java -XX:+UseSerialGC
but an empty variable still triggers the message (and as best I can tell, bash won't unset a variable that way). I needed it unset, so I had to bring in the env command like this:
env -u _JAVA_OPTIONS /usr/bin/java -XX:+UseSerialGC
My final profile script looks like this:
export _JAVA_OPTIONS="-XX:ParallelGCThreads=1 -XX:+UseParallelGC"
#export _JAVA_OPTIONS="-XX:+UseSerialGC"
alias java="env -u _JAVA_OPTIONS $(which --skip-alias java) $_JAVA_OPTIONS"
alias javac="env -u _JAVA_OPTIONS $(which --skip-alias javac)$(for i in $_JAVA_OPTIONS ; do echo -n " -J$i" ; done)"
You can select whichever GC you want; I have demonstrated with the more complex example, which has two options needing the -J prefix for javac.
The Java users can now happily compile away without the JVM claiming the entire machine's resources for every invocation.
Sunday, May 12, 2013
Can a KVM guest find out who its host is?
Our puppet configuration performs a number of checks on our asset database to make sure things are recorded correctly.
One of the properties we record for virtual machines is their location (which host they are running on).
By default, KVM does not expose this information even though there are many ways it could technically be done.
Using bits and pieces from around the internet I have come up with a process where I can pass the serial number of the host into the guest.
qemu permits you to specify many DMI values on the command line like this:
qemu-kvm ... -smbios type=1,serial="MY-SERIAL"
The virtual machine will see this value, and puppet will automatically turn it into a fact.
Unfortunately, libvirt does not use this mechanism and the serial number is blank in the virtual machine. The libvirt specification permits setting a value in the .xml config file but it is still not used.
I would like to insert a value which is the same as the host serial but with a prefix to indicate that this is indeed a virtual machine and then a suffix to make sure the serial is unique (such as the vm name).
Initially I used dmidecode to get the host serial number
dmidecode -s system-serial-number
but to run that you must be root and using sudo from the qemu user turned out to be a PITA. In the end I settled for
/usr/bin/hal-get-property --udi /org/freedesktop/Hal/devices/computer --key system.hardware.serial
which uses dbus but means I can run it without being root.
The next step was a shim around qemu-kvm which could add the command line parameters. On RHEL/CentOS 6, the binary lives in /usr/libexec, so I put my wrapper in /usr/local/libexec (I think that is the first time I have ever used that directory). When a VM is being started, the first parameter is -name followed by the machine name (at least in the current EL6 version; this was not the case in a previous release, so it could change). I check for that because qemu-kvm is also invoked by libvirt to probe its configuration/capabilities, which does not require the DMI serial (although it does not hurt).
/usr/local/libexec/qemu-kvm
#!/bin/bash
# This is a wrapper around qemu which will supply
# DMI information
if [ "$1" = "-name" ] ; then
SERIAL=$(/usr/bin/hal-get-property --udi /org/freedesktop/Hal/devices/computer --key system.hardware.serial)
exec /usr/libexec/qemu-kvm "$@" -smbios type=1,serial="KVM-$SERIAL-$2"
else
exec /usr/libexec/qemu-kvm "$@"
fi
The final step is to tell libvirt to use my new shim rather than the qemu-kvm binary directly. This also does not seem optimal but can be done by editing every guest and setting the <emulator> path to /usr/local/libexec/qemu-kvm
(either using virsh edit or your favourite XML editor).
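Once a guest has been restarted with the wrapper in place, the injected value can be checked from inside the guest:
# the serial appears via DMI, and facter exposes it as the serialnumber fact
dmidecode -s system-serial-number
facter serialnumber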
Wednesday, May 1, 2013
ProLiant Virtual Serial Port
While working through the configuration of iLO3 for our HP DL380 G7 servers, I found a few pages talking about the Virtual Serial Port (VSP).
This sounded interesting, so I thought I would try it out. I used this page as a guide (http://www.fatmin.com/2011/06/redirect-linux-console-to-hp-ilo-via-ssh.html) but it was a bit out of date.
The first thing to mention is that this does indeed work with iLO3. When configuring your iLO3, remember that in order to ssh to it you must allocate a PTY (-t -t), use protocol version 2 (-2) and use DSA keys (ssh-keygen -t dsa). My old configuration for older versions of iLO used exactly the opposite of all of these settings, which took some time to figure out (and HP only recently identified/fixed the bugs with ssh keys).
Once you can ssh to your iLO interface, you can issue the command
VSP
and it will start a terminal session (actually, it just passes everything through to your terminal program, so you can use xterm if you want, though it is safer to assume a vt102). To exit, press Escape and then ( and you will return to the iLO prompt.
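Putting that together, a typical session from a management host looks something like this (the iLO hostname and user name are illustrative):
# -t -t forces PTY allocation and -2 forces SSH protocol 2, as noted above
ssh -2 -t -t Administrator@ilo-server01
# then, at the iLO command line, start the virtual serial port by typing: VSP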
For the OS configuration, I found that I had to make a few alterations. First, RHEL6/CentOS6 uses Upstart, not inittab, so if you are not booting with a serial console you must create your own Upstart job:
/etc/init/ttyS1.conf
# ttyS1 - agetty
#
# This service maintains a agetty on ttyS1 for iLO3 VSP.
stop on runlevel [S016]
start on runlevel [235]
respawn
exec agetty -8 -L -w /dev/ttyS1 115200 vt102
You can then enable the console with the command
start ttyS1
Next time you reboot it should start automatically.
In order to log in as root you must add the terminal to the /etc/securetty file:
echo 'ttyS1' >> /etc/securetty
So that was all good. Now I want to roll this out to all my servers, which calls for some puppet magic. Not surprisingly, puppet let me down on most of the steps.
# Install the init script. This one is easy, just dump in the file
file { "/etc/init/ttyS1.conf":
...
}
# Add the entry to securetty
augeas { "securetty_ttyS1":
# Ha ha. securetty has a special lens which is different to most other config files
context => "/files/etc/securetty",
changes => [ "ins 0 before /files/etc/securetty/1",
"set /files/etc/securetty/0 ttyS1",
],
onlyif => "match *[.='ttyS1'] size == 0",
}
# Enable & start the service. My version of puppet does not support upstarts on CentOS so I can't do this:
service { "ttyS1":
provider => upstart,
ensure => running,
enable => true,
}
# Instead I have created my own type
define upstart($ensure = "running", $enable = "true") {
service { "upstart-$name":
provider => 'base',
ensure => $ensure,
enable => $enable,
hasstatus => true,
start => "/sbin/initctl start $name",
stop => "/sbin/initctl stop $name",
status => "/sbin/initctl status $name | /bin/grep -q '/running'",
}
}
upstart{"ttyS1": }
And finally, I need to make sure the server BIOS is configured with the VSP.
package { "hp-health": ensure => present } ->
service { "hp-health":
ensure => running,
hasstatus => true,
hasrestart => true,
enable => true,
} ->
exec { "vsp":
logoutput => "true",
path => ["/bin", "/sbin", "/usr/bin", "/usr/sbin"],
command => "/sbin/hpasmcli -s 'SET SERIAL VIRTUAL COM2'",
unless => "/sbin/hpasmcli -s 'SHOW SERIAL VIRTUAL' | grep 'The virtual serial port is currently COM2'",
require => Class['hp_drivers::service'],
} ->
# while we are messing with the serial ports, make COM1 work as the physical device
exec { "com1":
logoutput => "true",
path => ["/bin", "/sbin", "/usr/bin", "/usr/sbin"],
command => "/sbin/hpasmcli -s 'SET SERIAL EMBEDDED PORTA COM1'",
unless => "/sbin/hpasmcli -s 'SHOW SERIAL EMBEDDED' | grep 'Embedded serial port A: COM1'",
require => Class['hp_drivers::service'],
}
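If you just want to check (or change) a single box by hand, the same hpasmcli commands used in the unless clauses above can be run directly:
hpasmcli -s 'SHOW SERIAL VIRTUAL'
hpasmcli -s 'SHOW SERIAL EMBEDDED'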
Friday, April 19, 2013
Visual Studio 2010
I am not a fan of Visual Studio. Unfortunately I must use it for some projects. Recently I was forced to upgrade to Windows 7 and Visual Studio 2010. Not wanting to duplicate all my files, I decided to leave them on a network drive and just access them via the network.
Seems like a good idea, after all, why have a network if you store everything locally? Well, it seems that Visual Studio does not like that.
For some settings it will decide to find a suitable local directory for you. For some other settings it leaves you high and dry.
For example, when I try and build my program I get the error:
Error 1 error C1033: cannot open program database '\\server\share\working\project\debug\vc100.pdb' \\server\share\working\project\stdafx.cpp 1 1 project
The internet was of little use which is why I thought I would put it in my blog.
This goes a long way to explaining my criticism of Visual Studio. After installing several gigabytes of software, is that the best error message it can come up with? Well, I will try the help. Oh, that is online only. After jumping through some hoops, the help tells me that:
This error can be caused by disk error.
Well, no disk errors here. Perhaps it means that it does not like saving the .pdb on a network share. What is a .pdb anyway???
In the end, my solution (can I still call it that, now that Visual Studio has hijacked the word?) was to save the intermediate files locally:
- Open Project -> Properties...
- Select Configuration Properties\General
- Select Intermediate Directory
- Select <Edit...>
- Expand Macros>>
- Edit the value (by double clicking on the macros or just typing it in; see the example value below)
- Select OK
- Select OK
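I won't reproduce my exact value, but anything that points at a local disk will do. Using the standard Visual Studio macros, something along these lines should work (the layout under %TEMP% is just my assumption for the example):
$(TEMP)\$(ProjectName)\$(Configuration)\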
And that seems to sort it out.
Thursday, April 4, 2013
Graphing the inputs of PCF8591
As discussed in previous posts, the PCF8591 provides up to four analog inputs. Displaying these inputs as numbers makes it hard to visualise what is really going on so here I will present some code which can graph the values in real time.
I am still using my demo board from I2C Analog to Digital Converter. (Thanks to Martin X for finding the YL-40 schematic on the BrainFyre Blog.)
[Image: DX pcf8591-8-bit-a-d-d-a-converter-module-150190 - YL-40 schematic]
It is not clear from the schematic (or looking at the board) but the four inputs are:
- AIN0 - Jumper P5 - Light Dependent Resistor (LDR)
- AIN1 - Jumper P4 - Thermistor
- AIN2 - Not connected
- AIN3 - Jumper P6 - Potentiometer
So my plan is to graph the four inputs so I can visualise them responding to changes.
Once again, I don't want to set out to teach C programming but I will be introducing a library called curses (actually ncurses) which makes it easy to display a text interface in a terminal window (virtual terminal).
The program will also be able to adjust the analog output value using the + and - keys.
I will only add notes where I am doing something new compared to the example shown in Programming I2C.
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
We need a new header file
#include <ncurses.h>
int main( int argc, char **argv )
{
int i;
int r;
int fd;
unsigned char command[2];
unsigned char value[4];
useconds_t delay = 2000;
char *dev = "/dev/i2c-1";
int addr = 0x48;
int j;
int key;
Here we do some ncurses setup. This will allow us to check for a keyboard key press without having to wait for one to be pressed.
initscr();
noecho();
cbreak();
nodelay(stdscr, true);
curs_set(0);
This will print out a message on the screen (at the current cursor location which is the top left)
printw("PCF8591");
This will print out some labels for our graph bars. The text is printed at the specified location (row, column)
mvaddstr(10, 0, "Brightness");
mvaddstr(12, 0, "Temperature");
mvaddstr(14, 0, "?");
mvaddstr(16, 0, "Resistor");
We must now call refresh which will cause ncurses to update the screen with our changes
refresh();
fd = open(dev, O_RDWR );
if(fd < 0)
{
perror("Opening i2c device node\n");
return 1;
}
r = ioctl(fd, I2C_SLAVE, addr);
if(r < 0)
{
perror("Selecting i2c device\n");
}
command[1] = 0;
while(1)
{
for(i = 0; i < 4; i++)
{
command[0] = 0x40 | ((i + 1) & 0x03); // output enable | read input i
r = write(fd, &command, 2);
usleep(delay);
// the read is always one step behind the selected input
r = read(fd, &value[i], 1);
if(r != 1)
{
perror("reading i2c device\n");
}
usleep(delay);
The full range of the analog value (0 - 255) would not fit on most screens so we scale down by a factor of 4. This should fit nicely on an 80x25 terminal.
value[i] = value[i] / 4;
Position the cursor at the start of the bar
move(10 + i + i, 12);
For each position in the graph, either draw a * to show the value or a space to remove any * that might be there from a previous value
for(j = 0; j < 64; j++)
{
if(j < value[i])
{
addch('*');
}
else
{
addch(' ');
}
}
}
refresh();
Check the keyboard and process the keypress. In nodelay mode, getch() returns -1 (ERR) when no key is waiting; 43 and 45 are the ASCII codes for '+' and '-'.
key = getch();
if(key == 43)
{
command[1]++;
}
else if(key == 45)
{
command[1]--;
}
else if(key > -1)
{
break;
}
}
Shut down ncurses
endwin();
close(fd);
printf("%d\n", key);
return(0);
}
To compile this program you need to use a new flag, -l, which says to link with the specified library (ncurses)
gcc -Wall -o pcf8591d-graph pcf8591d-graph.c -lncurses
When you run the program you should see something like this:
PCF8591
Brightness *****************************************************
Temperature *******************************************************
? *********************
Resistor *****************************************
While it is running the graphs should move as you change the inputs. Try for example shining a torch on the LDR or adjusting the Pot.
You can adjust the green LED with + and - (hint, use - to go from 0 to 255 for maximum effect). Any other key will cause the program to quit.
The graph shows nicely how the inputs can change but it also shows how the values can fluctuate without any input changes. I can't explain exactly why, but I would expect much of the fluctuation comes from the lack of a stable external clock/oscillator and/or instability in the reference voltage. Needless to say this is a low cost demo board and may not be exploiting the full potential of the PCF8591.
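If the jitter bothers you, one simple experiment (purely a sketch; it is not included in the listing below) is to low-pass filter each reading before scaling it, for example with a running average per input:
// Hypothetical smoothing, applied right after the read() and before the divide by 4.
// 'smooth' would be an extra array declared outside the loop, e.g. static unsigned int smooth[4];
smooth[i] = (smooth[i] * 3 + value[i]) / 4;  // simple exponential average
value[i] = smooth[i];                        // then scale and plot as before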
Here is the complete source code (pcf8591d-graph.c):
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <ncurses.h>
int main( int argc, char **argv )
{
int i;
int r;
int fd;
unsigned char command[2];
unsigned char value[4];
useconds_t delay = 2000;
char *dev = "/dev/i2c-1";
int addr = 0x48;
int j;
int key;
initscr();
noecho();
cbreak();
nodelay(stdscr, true);
curs_set(0);
printw("PCF8591");
mvaddstr(10, 0, "Brightness");
mvaddstr(12, 0, "Temperature");
mvaddstr(14, 0, "?");
mvaddstr(16, 0, "Resistor");
refresh();
fd = open(dev, O_RDWR );
if(fd < 0)
{
perror("Opening i2c device node\n");
return 1;
}
r = ioctl(fd, I2C_SLAVE, addr);
if(r < 0)
{
perror("Selecting i2c device\n");
}
command[1] = 0;
while(1)
{
for(i = 0; i < 4; i++)
{
command[0] = 0x40 | ((i + 1) & 0x03); // output enable | read input i
r = write(fd, &command, 2);
usleep(delay);
// the read is always one step behind the selected input
r = read(fd, &value[i], 1);
if(r != 1)
{
perror("reading i2c device\n");
}
usleep(delay);
value[i] = value[i] / 4;
move(10 + i + i, 12);
for(j = 0; j < 64; j++)
{
if(j < value[i])
{
addch('*');
}
else
{
addch(' ');
}
}
}
refresh();
key = getch();
if(key == 43)
{
command[1]++;
}
else if(key == 45)
{
command[1]--;
}
else if(key > -1)
{
break;
}
}
endwin();
close(fd);
printf("%d\n", key);
return(0);
}
Tuesday, March 26, 2013
scp files with spaces in the filename
scp is a great tool. Built to run over ssh, it maintains a good unix design. It does, however, cause a few problems with its nonchalant handling of file names.
For example, transferring a file which has a space in the name causes problems because of the amount of escaping required to get the space to persist on the remote side of the connection.
Copying files from local to remote is easy enough, just quote or escape the file name using your shell.
scp Test\ File remote:
scp 'Test File' remote:
scp "Test File" remote:
Copying files from remote to local can be more tricky
scp remote:Test\ File .
scp: Test: No such file or directory
scp: File: No such file or directory
scp remote:"Test File" .
scp: Test: No such file or directory
scp: File: No such file or directory
scp "stimulus:Test File" .
scp: Test: No such file or directory
scp: File: No such file or directory
The escaping is working on your local machine but the remote is still splitting the name at the space.
To solve that, we need an extra level of escaping, one for the remote server. Remember that your local shell will eat a layer of backslashes, so you need \\ to send a single \ to the remote, plus one more \ to stop your own shell splitting on the space:
scp remote:Test\\\ File .
Test File 100% 0 0.0KB/s 00:00
You can also combine the remote escape with local quoting
scp remote:"Test\ File" .
or
scp "remote:Test\ File" .
Best solution of all, avoid spaces in file names. If you can't avoid it, I find the easiest solution is to replace ' ' with '\\\ '.
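If you have to do this a lot, you can also let the shell build the escaping for you. This is just a sketch of the idea using bash's printf %q, which quotes a string so that it survives one more round of shell parsing:
f='Test File'
scp "remote:$(printf '%q' "$f")" .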
Saturday, March 2, 2013
Programming I2C
Although you can perform simple i2c reads and writes using the command line tools i2cget and i2cset, for a more integrated approach you can use a programming language to talk to the bus.
There are dozens of languages which make claims about ease of use and learning etc., and I am sure you can program i2c from most of them.
What I will demonstrate here is the simple way to do it from c. Although I don't aim to teach how to program in c, I will try and explain what the code is doing so you can follow along even if you are new to c.
This will use some basic i2c reads and writes as described at http://www.kernel.org/doc/Documentation/i2c/dev-interface
We will also need to perform some IO Control (ioctl) calls which are i2c specific.
First we need some code to get us started. The #include lines basically make certain function calls and constants available to the rest of our program.
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
All of our code will live in the main function for now. main() is where all c programs start.
We need some variables which must be declared at the start of the function.
int main( int argc, char **argv )
{
int i;
int r;
int fd;
unsigned char command[2] = {0, 0}; // start with the analog output at 0
unsigned char value[4];
useconds_t delay = 2000;
char *dev = "/dev/i2c-1";
int addr = 0x48;
The [ ] syntax means an array so char command[2] is actually a variable which can hold 2 char values.
Some of our variables have been initialised to specific values so they are ready to use. The ones which have not been initialised will contain random values so we must assign a value to them before they can be used.
0x48 means hexadecimal 48 (which is decimal 72).
Next we print out a banner to show that the program is running
printf("PCF8591 Test\n");
Then we get down to business and open the i2c device
fd = open(dev, O_RDWR );
if(fd < 0)
{
perror("Opening i2c device node\n");
return 1;
}
and select our slave device
r = ioctl(fd, I2C_SLAVE, addr);
if(r < 0)
{
perror("Selecting i2c device\n");
}
Now we have an infinite loop
while(1)
{
There will be no way to end the program except by pressing Control C.
Next we have another loop which will run four times
for(i = 0; i < 4; i++)
{
Then we build a command for the pcf8591. The value of this is specified in the data sheet http://doc.chipfind.ru/pdf/philips/pca8591.pdf
In the first 8 bits of the command we will enable the analog output bit (0x40) and select which of the 4 inputs to read ((i + 1) & 0x03). We do a bitwise or to combine these values together with the | symbol.
command[0] = 0x40 | ((i + 1) & 0x03); // output enable | read input i
The // is the start of a comment so you can explain your code to the reader.
In the next 8 bits we increment the value for the analog output
command[1]++;
Now we are ready to send the command to the i2c bus
r = write(fd, &command, 2);
It is not clear why, but we need to wait for the command to be processed
usleep(delay);
Now we are ready to read a value. Remembering that the read is always one value behind the selected input (hence the +1 we used above).
r = read(fd, &value[i], 1);
if(r != 1)
{
perror("reading i2c device\n");
}
usleep(delay);
Then we end the loop
}
and now we can print out our results
printf("0x%02x 0x%02x 0x%02x 0x%02x\n", value[0], value[1], value[2], value[3]);
end our infinite loop
}
and although we may never reach here, we will clean up and quit.
close(fd);
return(0);
}
Now, if you enter all the code into a file called pcf8591d.c (you can copy the complete code as shown below) then you are ready to compile it with this command
gcc -Wall -o pcf8591d pcf8591d.c
This says to compile the .c file and write the output (-o) to pcf8591d (if you don't specify an output file the default of a.out will be used, which can be a bit confusing). -Wall will make sure all warnings are printed out by the compiler.
Assuming the compile (and link) was successful you are ready to run
./pcf8591d
and you should see output like this:
PCF8591 Test
0x5f 0xd3 0xac 0x80
0xc1 0xd3 0xae 0x80
0xc1 0xd3 0xb0 0x80
0xc1 0xd3 0xb2 0x80
0xc1 0xd3 0xb7 0x80
0xc1 0xd3 0xba 0x80
0xc1 0xd3 0xde 0x80
0xc1 0xd3 0xdc 0x80
0xc1 0xd3 0xe0 0x80
To stop the program press ^c (Control + C).
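As an aside, if you wanted the close(fd) at the end to actually run, one option (just a sketch, not part of the listing below) is to catch Control C and break out of the loop instead:
#include <signal.h>

static volatile sig_atomic_t stop = 0;

static void handle_sigint(int sig)
{
    (void)sig;   // unused
    stop = 1;    // tell the main loop to finish
}

// in main(), before the loop:  signal(SIGINT, handle_sigint);
// and change the loop condition to:  while(!stop)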
Here is the complete source code:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
// Written by John Newbigin jnewbigin@chrysocome.net
// Based on info from http://www.kernel.org/doc/Documentation/i2c/dev-interface
// And http://doc.chipfind.ru/pdf/philips/pca8591.pdf
int main( int argc, char **argv )
{
int i;
int r;
int fd;
unsigned char command[2] = {0, 0}; // start with the analog output at 0
unsigned char value[4];
useconds_t delay = 2000;
char *dev = "/dev/i2c-1";
int addr = 0x48;
printf("PCF8591 Test\n");
fd = open(dev, O_RDWR );
if(fd < 0)
{
perror("Opening i2c device node\n");
return 1;
}
r = ioctl(fd, I2C_SLAVE, addr);
if(r < 0)
{
perror("Selecting i2c device\n");
}
while(1)
{
for(i = 0; i < 4; i++)
{
command[0] = 0x40 | ((i + 1) & 0x03); // output enable | read input i
command[1]++;
r = write(fd, &command, 2);
usleep(delay);
// the read is always one step behind the selected input
r = read(fd, &value[i], 1);
if(r != 1)
{
perror("reading i2c device\n");
}
usleep(delay);
}
printf("0x%02x 0x%02x 0x%02x 0x%02x\n", value[0], value[1], value[2], value[3]);
}
close(fd);
return(0);
}
In the next blog I will improve the output to print a graph of the values so you can see them move up and down.
Wednesday, February 27, 2013
PXE boot WINPE
As part of our new Windows 7/AD deployment, SCCM is being used to control the imaging process of desktop computers.
We already have a comprehensive set of PXE enabled boot options so we needed a way to integrate the SCCM tools into our existing PXE setup.
The existing setup is syslinux(pxelinux) 3.11 on CentOS 5, ISC dhcpd and tftp.
We have several Linux live CDs, memtest, novell tools, DOS and chain booting to another (novell) server.
Our Microsoft team provide a bootable CD which we can use to image desktop machines and on some subnets they now provide the ability to PXE boot. Initially I hoped we could just chain boot to the SCCM server but that does not work.
After much research and testing I worked out a process using wimboot. Based on instructions found on http://ipxe.org/howto/sccm and http://forum.ipxe.org/showthread.php?tid=5745 I managed to write a script which converts the CD (ISO image) into something we can PXE boot.
The pxelinux entry is:
LABEL ad
com32 ad/linux.c32
append ad/wimboot initrd=ad/winpe.cpio
The files are in a tftp subdirectory called ad
linux.c32 is part of syslinux. I am using version 4.02 which was copied from a CentOS 6 server.
wimboot is available for download from ipxe.
winpe.cpio is a file we are going to create using my script below.
I also required a copy of bootmgr.exe which should be available on your Microsoft tftp server, but I was unable to find it and in the end I got a copy from our Microsoft team.
Finally, I needed a copy of wimlib, which has a linux version of the imagex tool. I was unable to find an RPM of this package so I just built it from source using the --without-ntfs-3g option. (The source comes with .spec files so an RPM should be easy to build).
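The build itself is the usual autotools routine; something along these lines (the tarball name reflects the version I used, your paths may differ):
tar xf wimlib-1.2.5.tar.gz
cd wimlib-1.2.5
./configure --without-ntfs-3g
make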
So here is my script:
convert_cd_into_pxe.sh:
#!/bin/bash
# Written by John Newbigin jnewbigin@chrysocome.net
ISO=x86_PROD_MS.iso
MNTPNT=wim
SCCMFILES=iso
IMAGEX=./wimlib-1.2.5/imagex
unalias cp
if [ "$(whoami)" = "root" ] ; then
if [ ! -f bootmgr.exe ] ; then
echo "You need a copy of bootmgr.exe"
exit 1
fi
umount $SCCMFILES
mkdir $SCCMFILES
mkdir $MNTPNT
mount -o loop $ISO $SCCMFILES
cp $SCCMFILES/boot/bcd BCD
cp $SCCMFILES/boot/boot.sdi .
cp $SCCMFILES/sources/boot.wim .
$IMAGEX mountrw boot.wim 1 $MNTPNT
cp -drv $SCCMFILES/sms/* $MNTPNT/sms/
umount $SCCMFILES
rmdir $SCCMFILES
ARCH=$(grep TsBootShell.exe $MNTPNT/Windows/System32/winpeshl.ini | cut -d \\ -f 4)
# edit winpeshl.ini
perl -pi -e "s|.*$ARCH.*|\"wscript.exe\",\"%SYSTEMDRIVE%\\\\sms\\\\bin\\\\$ARCH\\\\bootstrap.vbs\"|" $MNTPNT/Windows/System32/winpeshl.ini
# install bootstrap.vbs
cat > $MNTPNT/sms/bin/$ARCH/bootstrap.vbs << END
Set os = WScript.CreateObject ( "WScript.Shell" )
os.Run "%COMSPEC%", 7, false
os.Run "%COMSPEC% /c title Initialising... && wpeinit " & _
"&& net start dnscache", 1, true
os.Run WScript.ScriptFullName & "\..\TsmBootStrap.exe /env:WinPE " & _
"/configpath:%SYSTEMDRIVE%\sms\data", 1, true
END
$IMAGEX unmount $MNTPNT --commit
rmdir $MNTPNT
ls BCD bootmgr.exe boot.sdi boot.wim | cpio --create -H newc > winpe.cpio
rm BCD boot.sdi boot.wim
else
echo "You must be root to run this script"
exit 1
fi
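Once the script has produced winpe.cpio, just drop it into the tftp subdirectory that the pxelinux entry above points at, alongside wimboot and linux.c32 (the tftp root here is only an example; use whatever your tftp server is configured with):
cp winpe.cpio /tftpboot/ad/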
I hope this is of use to someone.