
Sunday, November 05, 2017

One Massive Screw Up

So at work we have some MX servers running Postfix with Mailscanner/SpamAssassin on FreeBSD.
Last week, continuing my FreeBSD 10.3 upgrades, I decided to tackle all five of these machines at the same time and take them to 11.1.
Everything went well, or so I thought.
Turns out that somewhere in the upgrade process, when I upgraded all the ports, the MailScanner port decided to overwrite all of its config files, so we wound up with a very malfunctioning mail scanner.
Plus due to some other strangeness, the servers were eating up all of their memory.
Well, I wasn't sure why, but I nailed it down to MailScanner.
So I reinstalled MailScanner and all of its dependencies.
That didn't do anything.
It was next figured out that a configuration file had been renamed, but the original still existed, so a quick copy fixed that error.
Then an issue was discovered in the antivirus scanner used by MailScanner; this was resolved by making a quick change to a configuration file.
It was then discovered that the Bayes database had grown too large to be useful, in excess of 5GB, and MailScanner was trying to load all of that into memory.
The database was purged and an automatic process was put into place to purge the database once a week.
Finally, as an extra precaution, the main MailScanner configuration file was modified to reduce the maximum number of child processes running at a time.
This limits the amount of mail that can be processed at once, but also ensures the memory used doesn't exceed what's available.
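For reference, the Bayes purge and the child-process cap boil down to something like the following. This is a rough sketch rather than exactly what I ran; the paths assume the FreeBSD port's defaults and the child count is just a placeholder.

# Wipe the oversized Bayes database
sa-learn --clear

# Weekly purge from root's crontab (--force-expire would be a gentler option)
0 3 * * 0 /usr/local/bin/sa-learn --clear > /dev/null 2>&1

# In MailScanner.conf (/usr/local/etc/MailScanner/ on the FreeBSD port),
# cap the number of child processes
Max Children = 2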
Anyway, all of this kept mail from being delivered for a few hours.
Beyond that I don't have anything terribly technical to report, just this horrible synopsis.
The bad part is I still have a bunch of servers I need to upgrade.

Oh, in coincidental and slightly related news, the migration from self-hosted Exchange 2010 to Office 365 is ongoing but working smoothly.  I guess I shouldn't be surprised.  The documentation is rather exhaustive, both official and otherwise.

I managed to migrate my own mailbox as a "proof of concept" and before I move any others I'll be testing it like crazy.  Of course, because I am not changing the mail flow until we're all off Exchange 2010, I need to make sure that the MXs continue working as expected.

Tuesday, October 24, 2017

I'm a work-a-holic

I don't know a better way to describe my condition than that.

I had the day off to deal with the dryer issue - see my previous post - and I spent a good chunk of the day working, as though it's somehow so important that I can't stop working even on my day off.  That is the biggest lie I think I've ever told myself, and I only realized it tonight as I was, you guessed it, working.

Now I have a wonderful, sweet child and a wife I could've been spending the day with, and I did spend some of it with them; we went to the zoo, which was great because I love the zoo. But what did I do before and after that? Work.  I worked on something on my day off.  Why?  Why would I work for free?  I don't get it anymore.

I think I need to just figure out a way to stop working; that would be the best first step, but I don't know what that would be.  Right before I wrote this blog entry I wrote a kind of angry email to my co-workers about stuff that broke today and how I didn't agree with their fix for it.  Of course I wasn't even there, so my opinion is invalid and I shouldn't have even been involved because I was off.

I don't know.

I do this all the time too: Christmas, Thanksgiving, even the week I took off to attend my father's funeral in Arlington I worked from there too.  WHY? I have wasted so much time these last 8 years working when I could be doing something else.  God, when I think about it like that I really hate myself, because I've not only robbed myself of time, but also my family and, if I can say so, my friends, of which I have very few.

I am just going to have to create a mental work block and put my foot down - perhaps I can come up with something better than that.  My co-workers seem great at not working after work; some of them seem great at not working while they're at work.  Why can't I just not work when I don't need to be?

I think to begin with I'm going to scrap my maintenance windows - because I'm the only one who has EVER done them. I asked for a dedicated maintenance window about two years ago and I got it, but aside from two instances where I got assistance because I asked for it, I don't think any of my co-workers have ever set up or even thought to set up maintenance on anything.  So why should I?  That would give me back my 3rd Saturdays and Thursday nights.

Who am I kidding, though? I'm already thinking about work: all the work I need to do tomorrow and all the work I need to do the next day.  I hate myself.

Saturday, October 14, 2017

New shoes.

I bought some new work shoes at Skechers. My old shoes were literally falling apart, not to mention scuffed and uncomfortable because I had worn out the insoles. One of the downsides of being morbidly obese I suppose. The new shoes have memory foam,  which seems to add to their comfort.
While I was there I bought some new tennis shoes, as well, for the gym. They're green, but comfortable. I don't really know what else to say about them. In fact I did over a mile on them last night at the gym. They were also 70% off. Originally $120 and now $36 plus tax of course. We'll see how long they last, but I was comfortable with that price.

Tuesday, October 10, 2017

The Meeting

Sometimes I feel like this at work. It seems like nothing ever happens and meetings don't accomplish anything.


Monday, October 09, 2017

I sometimes wonder why I care...

Twice now in three days there have been work-related outages. Both times I was the first to notice, thanks to aNag on my phone. I then notified my coworkers via text and email. Hours later they would respond or acknowledge reading the message - I know because I make it a point to request read receipts.

I just don't know why I try. Apparently I am the only one who really cares or the only one who wants to deal with it. Maybe it's both. I don't know.

I guess I am just frustrated more than anything. My coworkers are either apathetic or I'm trying too hard. Oh well, it's not like anything will ever change. It never does. Meeting after meeting, nothing changes. Day after day, week after week, month after month, year after year, nothing changes.

Maybe I need to make a change.

Thursday, October 09, 2014

CentOS 6 on a Dell Latitude 2100

So here at work I have a Dell Latitude 2100 from 2009.
Although, to be fair, it wasn't mine initially; I sort of inherited it.
Anyway, it's a half-decent system; inxi dump below (some information removed):

System: Host: 2100 Kernel: 3.17.0-1.el6.elrepo.i686 i686 (32 bit)
Desktop: N/A Distro: CentOS release 6.5 (Final) 

Machine: System: Dell (portable) product: Latitude 2100
Mobo: Dell model: 0W785N Bios: Dell v: A06 date: 07/30/2010

CPU: Single core Intel Atom N270 (-HT-) cache: 512 KB
Clock Speeds: 1: 1334 MHz 2: 1067 MHz

Graphics: Card: Intel Mobile 945GSE Express Integrated Graphics Controller
Display Server: X.Org 1.16.0 drivers: intel (unloaded: fbdev,vesa)
Resolution: 5280x877@1.0hz
GLX Renderer: NVIDIA GeForce GT 650M OpenGL Engine
GLX Version: 1.4 (2.1 NVIDIA-10.0.43 310.41.05f01)
 
Audio: Card Intel NM10/ICH7 Family High Definition Audio Controller
driver: snd_hda_intel
Sound: ALSA v: k3.17.0-1.el6.elrepo.i686
 
Network: Card-1: Broadcom NetXtreme BCM5764M Gigabit Ethernet PCIe
driver: tg3
IF: eth0 state: up speed: 1000 Mbps duplex: full
Card-2: Broadcom BCM4322 802.11a/b/g/n Wireless LAN Controller
driver: b43-pci-bridge
IF: wlan0 state: up
 
Drives: HDD Total Size: 250.1GB (3.9% used)
ID-1: /dev/sda model: WDC_WD2500BEVT size: 250.1GB

Anyway, it took some doing, but the system is working as I want it to. The details of what I did are below:

First I added some additional repositories so now I have the following repositories active:
* atomic
* base
* centosplus
* elrepo
* elrepo-extras
* elrepo-kernel
* epel
* extras
* fasttrack
* ius
* remi
* rpmforge
* rpmforge-extras
* rpmfusion-free-updates
* rpmfusion-nonfree-updates
* updates
* webtatic

Of course, after adding all the repos, I did a yum -y upgrade to ensure everything was as new and fresh as possible.
I did have to exclude gd from the CentALT repository by adding exclude=gd* to the end of the repo file.
I also installed the kernel-ml from the elrepo-kernel repository and modified grub in /etc/grub.conf to make sure it was the default boot kernel.
I mean, there isn't anything wrong with the 2.6 kernel used by default; I just wanted a 3.x kernel.
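If it helps, the exclude line and the kernel swap look roughly like this. It's a sketch from memory: the repo file name is assumed, and default=0 only works because the newest kernel ends up as the first grub entry, so check yours.

# Stop the CentALT repo from replacing gd (appended to the end of its repo file)
echo 'exclude=gd*' >> /etc/yum.repos.d/centalt.repo

# Pull in the mainline kernel and make it the default boot entry
yum --enablerepo=elrepo-kernel install -y kernel-ml
sed -i 's/^default=.*/default=0/' /boot/grub/grub.conf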

# Switch from the legacy network service to NetworkManager
chkconfig NetworkManager on
service NetworkManager start
chkconfig network off
chkconfig wpa_supplicant off

I soon discovered that my wifi wasn't working.
I confirmed this with dmesg.
A Google search later led me here. I just followed the directions and now wireless works flawlessly.

wget http://bues.ch/b43/fwcutter/b43-fwcutter-018.tar.bz2 http://bues.ch/b43/fwcutter/b43-fwcutter-018.tar.bz2.asc
gpg --verify b43-fwcutter-018.tar.bz2.asc
tar xjf b43-fwcutter-018.tar.bz2
cd b43-fwcutter-018
make
sudo make install
cd ..

export FIRMWARE_INSTALL_DIR="/lib/firmware"
wget http://www.lwfinger.com/b43-firmware/broadcom-wl-5.100.138.tar.bz2
tar xjf broadcom-wl-5.100.138.tar.bz2
sudo b43-fwcutter -w "$FIRMWARE_INSTALL_DIR" broadcom-wl-5.100.138/linux/wl_apsta.o

modprobe -r b43 bcma

modprobe b43

I made sure everything survived a reboot, and as expected it did.

The main downside of the 2100 is the 1024x600 resolution. In an effort to set some stuff up and get around this I decided to enable X11 forwarding.
This allowed me to test what I did next from my MacBook Pro, which actually worked quite well.
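In case anyone wants the recipe, the forwarding setup is only a couple of steps. This is the generic version rather than a record of exactly what I typed, and the hostname is made up.

# On the 2100: xauth must be present and sshd must allow forwarding
yum -y install xorg-x11-xauth
grep X11Forwarding /etc/ssh/sshd_config    # should say "X11Forwarding yes"
service sshd restart

# From the MacBook Pro (with XQuartz running)
ssh -Y user@latitude2100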

Browsers and plugins were next on the agenda. Firefox is included by default, but I wanted Chrome.
Unfortunately, Google decided that Chrome and CentOS 6 weren't gonna be friends anymore.
I can't run CentOS 7 as it is x86_64-only and this Atom isn't.
Anyway, after some searching around the Google I found Chromium will do what I want, so I set out to install it.
    
sudo -i
yum localinstall http://install.linux.ncsu.edu/pub/yum/itecs/public/chromium/rhel6/noarch/chromium-release-1.1-1.noarch.rpm
cd /etc/yum.repos.d
wget http://people.centos.org/hughesjr/chromium/6/chromium-el6.repo
yum install chromium

I had already done an ssh -Y to my 2100 from my Mac and set out to test that it worked with:
/opt/chromium/chrome-wrapper %U

So the next step was Flash:
rpm -ivh http://linuxdownload.adobe.com/adobe-release/adobe-release-i386-1.0-1.noarch.rpm
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-adobe-linux

yum -y install flash-plugin nspluginwrapper alsa-plugins-pulseaudio libcurl

In Firefox, about:plugins showed it was installed, but unfortunately there was still no Flash support in Chromium, so I pulled PepperFlash out of the Chrome RPM:
mkdir /tmp/working/
cd /tmp/working/
wget https://dl.google.com/linux/direct/google-chrome-stable_current_i386.rpm
rpm2cpio google-chrome-stable_current_i386.rpm | cpio -idv
mkdir /opt/chromium-browser/PepperFlash/
cp opt/google/chrome/PepperFlash/* /opt/chromium-browser/PepperFlash/

Restart Chromium and Flash works too!

The next step was Adobe Reader (yes I hate myself, I know there are lots of PDF readers, but I wanted this one).
sudo -i
cd /tmp
wget http://ardownload.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/AdbeRdr9.5.5-1_i486linux_enu.rpm
yum localinstall AdbeRdr9.5.5-1_i486linux_enu.rpm
yum install nspluginwrapper.i686 libcanberra-gtk2.i686 gtk2-engines.i686 PackageKit-gtk-module.i686
yum localinstall AdobeReader_enu nspluginwrapper.i686

Then run acroread to open it and accept the EULA.
If you want your browsers to see it you have to copy some files:
cp /opt/Adobe/Reader9/Browser/intellinux/nppdf.so /usr/lib/mozilla/plugins/ 

Next up: a working Java plugin.
I downloaded the RPM and followed the official install instructions:

Become root by running su and entering the super-user password.
Uninstall any earlier installations of the Java packages.
rpm -e <package_name>
Change to the directory in which you want to install. Type:
cd <directory path name>
For example, to install the software in the /usr/java/ directory, Type:
cd /usr/java

Install the package.
rpm -ivh jre-7u7-linux-i586.rpm



To configure the Java Plugin follow these steps:
Exit Firefox browser if it is already running.
Create a symbolic link to the libnpjp2.so file in the browser plugins directory
Go to the plugins sub-directory under the Firefox installation directory
cd <Firefox installation directory>/plugins

Create plugins directory if it does not exist.
Create the symbolic link

ln -s <Java installation directory>/lib/i386/libnpjp2.so
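Concretely, with the 32-bit JRE RPM above, the whole dance works out to roughly the following. The jre1.7.0_07 directory name is what the 7u7 RPM should create, but double-check it after the install; /usr/lib/mozilla/plugins is the usual system-wide plugin directory on 32-bit CentOS 6.

# Install the JRE (the RPM unpacks itself under /usr/java)
rpm -ivh jre-7u7-linux-i586.rpm

# Link the plugin where Firefox can find it
mkdir -p /usr/lib/mozilla/plugins
ln -s /usr/java/jre1.7.0_07/lib/i386/libnpjp2.so /usr/lib/mozilla/plugins/libnpjp2.so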



Then, because I don't already hate myself enough, I installed RealPlayer:
wget http://client-software.real.com/free/unix/RealPlayer11GOLD.rpm
rpm -ivh RealPlayer11GOLD.rpm
realplay

I also installed VLC because it pulled in all the other media dependencies I wanted installed.
yum -y install vlc

There were only two other packages I needed installed at this point: SecureCRT and the OwnCloud client.
That was just a matter of downloading the rpms and manually installing them.
I use OwnCloud to share my SecureCRT config between PCs, and I love that SecureCRT lets me access all my remote hosts regardless of my OS. I mean, sure, any terminal will do for SSH connections, but the convenience of SecureCRT is something I appreciate.

In case you were wondering, I was using a Pearson site to test all my browser plugins. It's a site I stumbled upon in my college days, and it surprisingly still exists.

Wednesday, October 01, 2014

CentOS Moodle and SQL Anywhere

So for some reason my place of employment is using a SQL Anywhere database, and some time ago someone said, "Hey, let's have this talk to Moodle since the user information is already in there."
The guy in charge of the technical aspects of our Moodle stuff at the time said, "No!" - He also might have said some other things, but no was definitely the main point.
However, he left maybe 2 years or so ago.

Then I got put in charge of Moodle (the back end technical stuff mind you, not the actual courses and content - that's not anything I want to deal with).
I was asked the same thing and decided, "what the hell, I'll try it."

God I wish I hadn't done that.

Anyway, it's been working for a while, basically without issue, until about 6 weeks ago, and that's when I realized everything I thought I knew was pretty much wrong.

So basically it seems that Moodle doesn't use the native SQL Anywhere PHP module at all. Instead it only needs the native client (in our case on CentOS 6 - it used to be CentOS 5).
It seems to talk to it purely over ODBC via ADOdb, as best I can tell.

Anyway, I had been going through all this trouble using an older version of PHP to maintain compatibility with the PHP module - as I now know, rather needlessly. I've been wanting to use PHP-FPM, but for some reason couldn't get it working correctly with the module provided by SAP (formerly Sybase).

The other problem I ran into is that I couldn't find a newer version of the SQL Anywhere client, only this one from 2011: http://www.sybase.com/detail?id=1087327 - mostly because I was searching for Sybase and not SAP. I accidentally found this newer one, http://scn.sap.com/docs/DOC-35857, and as soon as I installed it all my problems (well, a lot of them) went away.

Basically at this point I am just using unixODBC (or at least the files it creates/uses) along with the Linux client linked above, with some minor tweaks, and it kind of works. Most of the remaining issues are random back-end issues, but thankfully other people are fixing those.

All I can say is if you're thinking about doing this, don't.  
You can, just be aware it is quite a headache.

I must have spent 6 weeks and countless hours troubleshooting all these seemingly random error messages and doing all kinds of crazy things.

Oh, and did I mention we had some 65 vhosts on one server, and we were using Apache, not nginx?  The server was pretty well specced out, 8 cores and 48GB of RAM, but it didn't help.  It seems the older Linux client - or the authentication module in Moodle talking to it - had some kind of memory leak.  Either way, the new Linux client for SQL Anywhere made the memory leaks go away, but I have now set up some 56 virtual machines (as I discovered some of those vhosts were no longer live/active). They are a bit paltry - 2 cores, 2 GB of RAM and 80 GB of hard disk space - all running CentOS 6.5, PHP 5.6, Apache 2.2.7 and Moodle 2.7.2+ as of now.  I'm running SQL Anywhere 12. I am also using the EPEL, remi, CentALT, and RepoForge repositories.

I'd like to eventually switch them all to use PHP-FPM as that is what Moodle seems to recommend.
At this point I don't even have the SQL Anywhere PHP module installed; both enrollment and authentication are set to ODBC and I've run into no real issues.

At present these are my only cron entries per server:

*/5 * * * * . /root/.bash_profile > /dev/null 2>&1; /usr/bin/php /var/www/html/main/admin/cli/cron.php > /dev/null 2>&1
*/5 * * * * . /root/.bash_profile > /dev/null 2>&1; /usr/bin/php /var/www/html/main/enrol/database/cli/sync.php > /dev/null 2>&1 

I'll explain that shortly.
In /etc/ I have an odbc.ini and an odbcinst.ini, the first containing the DSN information and the second the location of the driver:



[SQLAnywhere12]
Description = ODBC for Sybase SQL Anywhere 12
Driver = /opt/sqlanywhere12/lib64/libdbodbc12.so
Setup = /opt/sqlanywhere12/lib64/libdbodbc12.so
FileUsage = 1
Trace = yes
TraceFile = /var/log/odbc.log 
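That block is the driver-definition (odbcinst.ini) half; the odbc.ini half is just a DSN that points at it. Ours looks something like the sketch below - every value is a placeholder, and the exact connection parameters you need (Host vs. ServerName vs. CommLinks) depend on how your SQL Anywhere server is reachable.

[moodle]
Driver       = SQLAnywhere12
Description  = SQL Anywhere DSN used by Moodle auth/enrolment
Host         = dbhost.example.com:2638
ServerName   = example_server
DatabaseName = example_db
Uid          = moodle_ro
Pwd          = changeme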

Then through trial and error I had to make the following changes/additions to my systems:

Edit the web server environment variables to include the location of the Sybase client (for this particular server, edit /etc/sysconfig/httpd – see below)
LD_LIBRARY_PATH=/opt/sqlanywhere12/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
source /opt/sqlanywhere12/bin64/sa_config.sh
export SQLANY12=/opt/sqlanywhere12
--
Symlink the odbc and odbcinst files to /usr/bin/.odbc.ini and /usr/bin/.odbcinst.ini respectively – I am not sure if this has to be done every time, it may be specific to our server.
ln -s /etc/odbc.ini /usr/bin/.odbc.ini
ln -s /etc/odbcinst.ini /usr/bin/.odbcinst.ini
For good measure I also added the following symlinks
ln -s /etc/odbc.ini /var/www/.odbc.ini
ln -s /etc/odbc.ini /sbin/.odbc.ini
ln -s /etc/odbc.ini /usr/sbin/.odbc.ini
ln -s /etc/odbc.ini /bin/.odbc.ini
This lets Apache know where to find the ODBC settings on our server.
--
Symlink the libdbodbc12.so.1 file to libodbc.so.1 and libodbc.so.2 in the /opt/sqlanywhere12/lib64 directory
Be sure to be in the /opt/sqlanywhere12/lib64 directory first
ln -s libdbodbc12.so.1 libodbc.so.1
ln -s libdbodbc12.so.1 libodbc.so.2
This fixes some php errors
--
Symlink the libdbodbc12_r.so.1 file to libodbc_r.so.1 and libodbc_r.so.2 in the /opt/sqlanywhere12/lib64 directory
Be sure to be in the /opt/sqlanywhere12/lib64 directory first
ln -s libdbodbc12_r.so.1 libodbc_r.so.1
ln -s libdbodbc12_r.so.1 libodbc_r.so.2
This fixes some php errors
--


It was also helpful for testing to add


source /opt/sqlanywhere12/bin64/sa_config.sh


to /root/.bash_profile


Also running source /opt/sqlanywhere12/bin64/sa_config.sh in the terminal helps.
--
Open port 2638 on the system firewalls (in our case iptables)
Add the following line to iptables
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2638 -j ACCEPT
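On CentOS 6 that means putting the rule into /etc/sysconfig/iptables (above the final catch-all REJECT line) and then reloading the firewall. A quick sketch of the follow-up, assuming the stock chain layout:

# After editing /etc/sysconfig/iptables, reload the rules and confirm the port is open
service iptables restart
iptables -L -n | grep 2638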
--

Something I forgot to mention last time: for this to work you have to symlink /opt/sqlanywhere12/bin64/sa_config.sh into /etc/profile.d/
ln -s /opt/sqlanywhere12/bin64/sa_config.sh /etc/profile.d/sa_config.sh
be sure to chmod +x /opt/sqlanywhere12/bin64/sa_config.sh first or else it won’t run!
That adds the client config to every logged-on user's path (I think).
--
On the Moodle server, under Site Administration -> Plugins -> Authentication -> External Database and Site Administration -> Plugins -> Enrolments -> External Database, make sure the host name matches the DSN and the database type is set to ODBC.
DBName, DBUser and Password should remain blank, as the odbc files store this information.
All the other database stuff, as it pertains to the Table names and fields, isn't related to my part of the setup. Someone else figured that out.
--
Another important step is that SELinux be disabled – it just gets in the way – there is documentation on the CentOS website on how to accomplish this. Now I am sure you can get this working with SELinux enabled, but I was far too lazy to figure that out.
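For completeness, the short version of disabling it on CentOS 6 amounts to the two usual steps below; permissive mode might well be enough, I just never tested it.

# Stop enforcing immediately (drops SELinux to permissive)
setenforce 0

# Make the change permanent across reboots
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config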
--
Which reminds me: it is a good idea to have strace installed on any system we do this with, as it is an invaluable debugging tool.
E.g. ps auxw | grep sbin/httpd | awk '{print"-p " $2}' | xargs strace 2>&1 | grep ini
The above runs strace against Apache; it showed me that Apache was looking for the odbc ini files in a different location, which is what made the symlinks above necessary and ultimately resolved the issue.
--
Additionally, Google is your friend - most of the time the errors you get tend to be specific, but if you search for parts of the error messages, or know which keywords to include or exclude, you can usually find someone else who had the same problem and how they fixed it.


I don't have a clue

I'm so very tired. It's almost all the time now.