Thursday, October 09, 2014

CentOS 6 on a Dell Latitude 2100

So here at work I have a Dell Latitude 2100 from 2009.
To be fair it wasn't mine initially; I sort of inherited it.
Anyway, it's a half decent system. inxi dump below (some information removed):

System: Host: 2100 Kernel: 3.17.0-1.el6.elrepo.i686 i686 (32 bit)
Desktop: N/A Distro: CentOS release 6.5 (Final) 

Machine: System: Dell (portable) product: Latitude 2100
Mobo: Dell model: 0W785N Bios: Dell v: A06 date: 07/30/2010

CPU: Single core Intel Atom N270 (-HT-) cache: 512 KB
Clock Speeds: 1: 1334 MHz 2: 1067 MHz

Graphics: Card: Intel Mobile 945GSE Express Integrated Graphics Controller
Display Server: X.Org 1.16.0 drivers: intel (unloaded: fbdev,vesa)
Resolution: 5280x877@1.0hz
GLX Renderer: NVIDIA GeForce GT 650M OpenGL Engine
GLX Version: 1.4 (2.1 NVIDIA-10.0.43 310.41.05f01)
 
Audio: Card Intel NM10/ICH7 Family High Definition Audio Controller
driver: snd_hda_intel
Sound: ALSA v: k3.17.0-1.el6.elrepo.i686
 
Network: Card-1: Broadcom NetXtreme BCM5764M Gigabit Ethernet PCIe
driver: tg3
IF: eth0 state: up speed: 1000 Mbps duplex: full
Card-2: Broadcom BCM4322 802.11a/b/g/n Wireless LAN Controller
driver: b43-pci-bridge
IF: wlan0 state: up
 
Drives: HDD Total Size: 250.1GB (3.9% used)
ID-1: /dev/sda model: WDC_WD2500BEVT size: 250.1GB

Anyway, it took some doing, but the system is working as I want it to; the details of what I did are below:

First I added some additional repositories so now I have the following repositories active:
* atomic
* base
* centosplus
* elrepo
* elrepo-extras
* elrepo-kernel
* epel
* extras
* fasttrack
* ius
* remi
* rpmforge
* rpmforge-extras
* rpmfusion-free-updates
* rpmfusion-nonfree-updates
* updates
* webtatic

Of course after adding all the repos I did yum -y upgrade to ensure everything was as new and fresh as possible.
I did have to exclude gd from the CentALT repository by adding exclude=gd* to the end of the repo file.
I also installed kernel-ml from the elrepo-kernel repository and modified /etc/grub.conf to make sure it was the default boot kernel.
I mean, there isn't anything wrong with the 2.6 kernel used by default, I just wanted a 3.x kernel.
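For anyone curious, the grub change is just pointing default at the new entry; a minimal sketch, assuming the elrepo kernel landed as the first title entry (the index is 0-based, so check first):

grep ^title /etc/grub.conf    # see which entry is the 3.x kernel (counting from 0)
sed -i 's/^default=.*/default=0/' /etc/grub.conf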

I also switched the box over to NetworkManager:

chkconfig NetworkManager on
service NetworkManager start
chkconfig network off
chkconfig wpa_supplicant off

I soon discovered that my wifi wasn't working, which I confirmed with dmesg.
A Google search later led me here. I just followed the directions and now wireless works flawlessly.

wget http://bues.ch/b43/fwcutter/b43-fwcutter-018.tar.bz2 http://bues.ch/b43/fwcutter/b43-fwcutter-018.tar.bz2.asc
gpg --verify b43-fwcutter-018.tar.bz2.asc
tar xjf b43-fwcutter-018.tar.bz2
cd b43-fwcutter-018
make
sudo make install
cd ..

export FIRMWARE_INSTALL_DIR="/lib/firmware"
wget http://www.lwfinger.com/b43-firmware/broadcom-wl-5.100.138.tar.bz2
tar xjf broadcom-wl-5.100.138.tar.bz2
sudo b43-fwcutter -w "$FIRMWARE_INSTALL_DIR" broadcom-wl-5.100.138/linux/wl_apsta.o

modprobe -r b43 bcma

modprobe b43

I made sure everything survived a reboot and, as expected, it did.

The main downside of the 2100 is the 1024x600 resolution. In an effort to set some stuff up and get around this I decided to enable X11 forwarding.
This allowed me to test what I did next on my MacBook Pro, which actually worked quite well.
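For reference, the forwarding setup itself was nothing fancy; a rough sketch, assuming a stock CentOS 6 sshd, with the user and hostname below being placeholders:

# On the 2100: make sure sshd allows forwarding, then restart it
grep -i '^X11Forwarding' /etc/ssh/sshd_config    # should say "X11Forwarding yes"
service sshd restart
# From the Mac:
ssh -Y someuser@latitude2100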

Browsers and plugins were next on the agenda. Firefox is included by default, but I wanted Chrome.
Unfortunately Google decided that Chrome and CentOS 6 weren't gonna be friends anymore.
I can't run CentOS 7 as it is x86_64 only and this Atom isn't.
Anyway, after some searching around the Google I found that Chromium will do what I want, so I set out to install it.
    
sudo -i
yum localinstall http://install.linux.ncsu.edu/pub/yum/itecs/public/chromium/rhel6/noarch/chromium-release-1.1-1.noarch.rpm
cd /etc/yum.repos.d
wget http://people.centos.org/hughesjr/chromium/6/chromium-el6.repo
yum install chromium

I had already done an ssh -Y to my 2100 from my Mac and set out to test that it worked with:
/opt/chromium/chrome-wrapper %U

The next step was Flash:
rpm -ivh http://linuxdownload.adobe.com/adobe-release/adobe-release-i386-1.0-1.noarch.rpm
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-adobe-linux

yum -y install flash-plugin nspluginwrapper alsa-plugins-pulseaudio libcurl

In Firefox, about:plugins showed it was installed, but unfortunately there was still no Flash support in Chromium, so I pulled the PepperFlash bits out of Google's Chrome RPM instead:
mkdir /tmp/working/
cd /tmp/working/
wget https://dl.google.com/linux/direct/google-chrome-stable_current_i386.rpm
rpm2cpio google-chrome-stable_current_i386.rpm | cpio -idv
mkdir /opt/chromium-browser/PepperFlash/
cp opt/google/chrome/PepperFlash/* /opt/chromium-browser/PepperFlash/

Restart Chromium and Flash works too!
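If Chromium ever refuses to pick the plugin up on its own, it can also be pointed at the library explicitly; I didn't end up needing this, so treat it as a sketch, with the version string coming from the manifest.json that sits next to the library:

grep '"version"' /opt/chromium-browser/PepperFlash/manifest.json
/opt/chromium/chrome-wrapper --ppapi-flash-path=/opt/chromium-browser/PepperFlash/libpepflashplayer.so --ppapi-flash-version=<version from the manifest> %U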

The next step was Adobe Reader (yes, I hate myself; I know there are lots of PDF readers, but I wanted this one).
sudo -i
cd /tmp
wget http://ardownload.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/AdbeRdr9.5.5-1_i486linux_enu.rpm
yum localinstall AdbeRdr9.5.5-1_i486linux_enu.rpm
yum install nspluginwrapper.i686 libcanberra-gtk2.i686 gtk2-engines.i686 PackageKit-gtk-module.i686
yum install AdobeReader_enu nspluginwrapper.i686

Then run acroread to open it and accept the EULA.
If you want your browsers to see it, you have to copy the plugin into place:
cp /opt/Adobe/Reader9/Browser/intellinux/nppdf.so /usr/lib/mozilla/plugins/ 

Next up: a working Java plugin.
I downloaded the RPM from Oracle and followed their install instructions:

Become root by running su and entering the super-user password.
Uninstall any earlier installations of the Java packages.
rpm -e <package_name>
Change to the directory in which you want to install. Type:
cd <directory path name>
For example, to install the software in the /usr/java/ directory, type:
cd /usr/java

Install the package.
rpm -ivh jre-7u7-linux-i586.rpm

To configure the Java Plugin, follow these steps:
Exit Firefox browser if it is already running.
Create a symbolic link to the libnpjp2.so file in the browser plugins directory
Go to the plugins sub-directory under the Firefox installation directory
cd <Firefox installation directory>/plugins

Create plugins directory if it does not exist.
Create the symbolic link:

ln -s <Java installation directory>/lib/i386/libnpjp2.so
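On this box that boiled down to something like the following; the JRE directory name is my guess based on the 7u7 RPM above, so check what actually landed under /usr/java:

mkdir -p /usr/lib/mozilla/plugins
ln -s /usr/java/jre1.7.0_07/lib/i386/libnpjp2.so /usr/lib/mozilla/plugins/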

Then, because I don't already hate myself enough, I installed RealPlayer:
wget http://client-software.real.com/free/unix/RealPlayer11GOLD.rpm
rpm -ivh RealPlayer11GOLD.rpm
realplay

I also installed VLC because it pulled in all the other media dependencies I wanted.
yum -y install vlc

There were only two other packages I needed at this point: SecureCRT and the OwnCloud client.
That was just a matter of downloading the RPMs and manually installing them.
I use OwnCloud to share my SecureCRT configuration between PCs, and I love that SecureCRT lets me access all my remote hosts regardless of my OS. Sure, any terminal will do for SSH connections, but the convenience of SecureCRT is something I appreciate.
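In practice "manually installing them" was just yum pointed at the downloaded files; the file names below are placeholders for whatever versions were current at the time:

yum -y localinstall /tmp/scrt-*.i386.rpm /tmp/owncloud-client-*.rpm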

In case you were wondering, I was using a Pearson site to test all my browser plugins. It was a site I stumbled upon in my college days and it surprisingly still exists.

Tuesday, October 07, 2014

BadUSB – it’s a real threat and not easily fixable


On Saturday I was sent a news article about “An unfixable USB bug could lead to unstoppable malware”.

Yesterday the BBC published a story about the same topic, and via a FreeBSD mailing list thread, I found this BlackHat 2014 video from August.

Plus a GitHub repository with code to test the exploit, and here is the website for the folks in the video, in case you were curious.

Basically, the takeaway is two-fold: don’t trust any USB devices that aren’t yours (don’t accept gifts/swag/freebies), and aside from disabling the physical USB ports on PCs/servers there isn’t much you can do at the moment to prevent an exploited USB device from doing potential harm to your systems.

On Windows, USB ports can be disabled via Group Policy, registry hacks, third-party software and of course within the BIOS/UEFI. Non-Windows systems have similar abilities to disable USB ports.
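On the Linux side, for example, you can at least tell the kernel not to authorize new devices by default and keep the mass-storage driver from loading; this is only a sketch of the idea, and it won't save you from a device you plug in and authorize yourself:

echo 0 > /sys/bus/usb/devices/usb1/authorized_default    # per root hub, not persistent across reboots
echo "install usb-storage /bin/true" >> /etc/modprobe.d/disable-usb-storage.conf    # block the mass-storage driver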

That’s right, for those who didn’t check out the links above: this exploit is not OS dependent and relies only on a USB port being available on a device.

It’s also not limited to flash drives; even smartphones and other USB peripherals can be exploited. Perhaps it is time to wish that your PC/server still had a PS/2 port…

Not that I am endorsing them in any way, but IronKey claims their USB devices aren’t vulnerable because they sign their firmware, as does Kanguru.
Signed firmware is also the recommendation made by the researchers who discovered the flaw and presented their findings in the above video.

So basically, until all USB device vendors decide to sign their firmware and only allow their hardware to accept that signed firmware, this exploit will be in the wild.

The downside is that all devices made before vendors start signing their firmware will likely remain exploitable, but on the plus side(?) USB device vendors will get richer!

My guess is within 1 to 2 years some of the more mainstream USB vendors will begin signing their firmware, but it will probably be a few years after that before most are signed by default.

Wednesday, October 01, 2014

CentOS Moodle and SQL Anywhere

So for some reason my place of employment is using a SQL Anywhere database, and some time ago someone said, hey, let's have this talk to Moodle since the user information is already in there.
The guy in charge of the technical aspects of our Moodle stuff at the time said, "No!" - he also might have said some other things, but no was definitely the main point.
However, he left, maybe 2 years or so ago.  

Then I got put in charge of Moodle (the back end technical stuff mind you, not the actual courses and content - that's not anything I want to deal with).
I was asked the same thing and decided, "what the hell, I'll try it."

God I wish I hadn't done that.

Anyway, it had been working for a while, basically without issue, until about 6 weeks ago, and that's when I realized everything I thought I knew was pretty much wrong.

So basically, it seems that Moodle doesn't use the native SQL Anywhere PHP module at all; it only uses the native client (in our case on CentOS 6, though it used to be CentOS 5).
As best I can tell, it only talks to the database over ODBC via ADOdb.

Anyway, I had been going through all this trouble of using an older version of PHP to maintain compatibility with the PHP module, which I now know was needless. I've been wanting to use PHP-FPM, but for some reason couldn't get it working correctly with the module provided by SAP (formerly Sybase).

The other problem I ran into is that I couldn't find a newer version of the SQL Anywhere client, only this one from 2011: http://www.sybase.com/detail?id=1087327, which is mostly because I was searching for Sybase and not SAP. I accidentally found this newer one, http://scn.sap.com/docs/DOC-35857, and as soon as I installed it all my problems (well, a lot of them) went away.

Basically, at this point I am just using unixODBC (or at least the files it creates/uses) along with the Linux client linked above, with some minor tweaks, and it kind of works. Most of the remaining issues are random back end issues, but thankfully other people are fixing those.

All I can say is if you're thinking about doing this, don't.  
You can, just be aware it is quite a head ache.

I must have spent 6 weeks and countless hours troubleshooting all these seemingly random error messages and doing all kinds of crazy things.

Oh, and did I mention we had some 65 vhosts on one server, and we were using Apache, not nginx? The server was pretty well spec'd out, 8 cores and 48 GB of RAM, but it didn't help. It seems either the older Linux client or the Moodle authentication module talking to it had some kind of memory leak. Either way, the new Linux client for SQL Anywhere made the memory leaks go away.

I have now set up some 56 virtual machines (as I discovered some of those vhosts were no longer live/active). They are a bit paltry: 2 cores, 2 GB of RAM and 80 GB of hard disk space, all running CentOS 6.5, PHP 5.6, Apache 2.2.7 and Moodle 2.7.2+ as of now. I'm running SQL Anywhere 12, and I am also using the EPEL, remi, CentALT, and RepoForge repositories.

I'd like to eventually switch them all to use PHP-FPM as that is what Moodle seems to recommend.
At this point I don't even have the SQL Anywhere module installed and both enrollment and authentication are set to ODBC and I've run into no real issues. 

At present these are my only cron entries per server:

*/5 * * * * . /root/.bash_profile > /dev/null 2>&1; /usr/bin/php /var/www/html/main/admin/cli/cron.php > /dev/null 2>&1
*/5 * * * * . /root/.bash_profile > /dev/null 2>&1; /usr/bin/php /var/www/html/main/enrol/database/cli/sync.php > /dev/null 2>&1 

I'll explain that shortly.
In /etc/ I have an odbc.ini and an odbcinst.ini, the first containing the DSN information and the second the location of the driver:

[SQLAnywhere12]
Description = ODBC for Sybase SQL Anywhere 12
Driver = /opt/sqlanywhere12/lib64/libdbodbc12.so
Setup = /opt/sqlanywhere12/lib64/libdbodbc12.so
FileUsage = 1
Trace = yes
TraceFile = /var/log/odbc.log 
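The DSN side of things (the part that goes in odbc.ini) would look roughly like the following; the server, host, database and account names here are placeholders, not our real ones:

[moodle_sa]
Driver = SQLAnywhere12
ServerName = sa_server_name
DatabaseName = moodle_users
CommLinks = tcpip(Host=sa-host.example.com;ServerPort=2638)
Userid = moodle_ro
Password = changeme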

Then through trial and error I had to make the following changes/additions to my systems:

Edit the web server environment variables to include the location of the Sybase client (for this particular server, edit /etc/sysconfig/httpd – see below):
LD_LIBRARY_PATH=/opt/sqlanywhere12/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
. /opt/sqlanywhere12/bin64/sa_config.sh
export SQLANY12=/opt/sqlanywhere12
--
Symlink the odbc and odbcinst files to /usr/bin/.odbc.ini and /usr/bin/.odbcinst.ini respectively – I am not sure if this has to be done every time; it may be specific to our server.
ln -s /etc/odbc.ini /usr/bin/.odbc.ini
ln -s /etc/odbcinst.ini /usr/bin/.odbcinst.ini
For good measure I also added the following symlinks:
ln -s /etc/odbc.ini /var/www/.odbc.ini
ln -s /etc/odbc.ini /sbin/.odbc.ini
ln -s /etc/odbc.ini /usr/sbin/.odbc.ini
ln -s /etc/odbc.ini /bin/.odbc.ini
This lets Apache know where to find the ODBC settings on our server.
--
Symlink the libdbodbc12.so.1 file to libodbc.so.1 and libodbc.so.2 in the /opt/sqlanywhere12/lib64 directory.
Be sure to be in the /opt/sqlanywhere12/lib64 directory first:
ln -s libdbodbc12.so.1 libodbc.so.1
ln -s libdbodbc12.so.1 libodbc.so.2
This fixes some PHP errors.
--
Symlink the libdbodbc12_r.so.1 file to libodbc_r.so.1 and libodbc_r.so.2 in the /opt/sqlanywhere12/lib64 directory.
Be sure to be in the /opt/sqlanywhere12/lib64 directory first:
ln -s libdbodbc12_r.so.1 libodbc_r.so.1
ln -s libdbodbc12_r.so.1 libodbc_r.so.2
This fixes some PHP errors.
--
It was also helpful for testing to add

source /opt/sqlanywhere12/bin64/sa_config.sh

to /root/.bash_profile.
Also running source /opt/sqlanywhere12/bin64/sa_config.sh in the terminal helps.
--
Open port 2638 on the system firewalls (in our case iptables).
Add the following line to iptables:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2638 -j ACCEPT
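Applying that either means editing /etc/sysconfig/iptables and restarting the service, or inserting the rule live and saving it; a quick sketch, assuming the RH-Firewall-1-INPUT chain from the rule above is actually in use:

iptables -I RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2638 -j ACCEPT
service iptables save    # writes the running rules back to /etc/sysconfig/iptables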
--

Something I forgot to mention last time: for this to work you have to symlink /opt/sqlanywhere12/bin64/sa_config.sh into /etc/profile.d/:
ln -s /opt/sqlanywhere12/bin64/sa_config.sh /etc/profile.d/sa_config.sh
Be sure to chmod +x /opt/sqlanywhere12/bin64/sa_config.sh first or else it won’t run!
That adds the client configuration to every logged-on user's environment. (I think)
--
On the Moodle server, under Site Administration -> Plugins -> Authentication -> External Database and Site Administration -> Plugins -> Enrolments -> External Database, make sure the host name matches the DSN and the database type is set to ODBC.
The DB name, DB user and password should remain blank, as the odbc.ini file stores this information.
All the other database stuff, as it pertains to the table names and fields, isn't related to my part of the setup. Someone else figured that out.
--
Another important step is that SELinux be disabled – it just gets in the way – and there is documentation on the CentOS website on how to accomplish this. Now, I am sure you can get this working with SELinux enabled, but I was far too lazy to figure that out.
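For reference, the quick and dirty version of that is:

setenforce 0    # takes effect immediately, lasts until reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # permanent, takes effect after a reboot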
--
Which reminds me that it is a good idea to have strace installed on any system we do this with; it is an invaluable debugging tool.
E.g. ps auxw | grep sbin/httpd | awk '{print "-p " $2}' | xargs strace 2>&1 | grep ini
The above runs strace against the Apache processes. That is how I found out Apache was looking for the ODBC ini files in a different location, which is what made creating the symlinks necessary and resolved the issue.
--
Additionally, Google is your friend. Most of the time the errors you get tend to be specific, but if you search for parts of the error messages, or know what keywords to include or exclude, you can almost always find someone else who had the same problem and how they fixed it.


I don't have a clue

I'm so very tired. It's almost all the time now.