Monday, December 30, 2013

Kumbaya

I promised I'd figure out the skunk in the woodpile of the Ryan-Murray agreement, and I have: The keyword is Discretionary.
They have capped Discretionary spending, but not touched the "Mandatory" spending: the entitlements. Furthermore, while they may have "capped" expenses, they "capped" them at a higher level than ever before!
Check it out, the references are at the bottom, but here is the BLUF (Bottom Line Up Front), with the numbers divided by 10^8 (eight zeros lopped off) to put them in terms that humans can understand.
It is as if your home budget were this:
Item              2012        2013        2014
Total spending:   $35,380     $34,500     $38,000
Total Revenue:    $26,270     $29,020     $30,300
Deficit:          $10,890     $5,480      $7,700
Total Debt:       $155,660    $170,700    $178,800
You are spending 25% more than you earn.
And you are already $170K in debt.
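If you want to sanity-check the scaling and that 25% figure, a quick bit of shell arithmetic will do it. The trillion-dollar figure below is nothing more than the table's 2014 spending number with its eight zeros put back on:

    # Put the eight zeros back on the 2014 spending row
    echo '38000 * 10^8' | bc            # 3800000000000, i.e. roughly $3.8 trillion
    # Compare 2014 spending to 2014 revenue
    echo 'scale=2; 38000 / 30300' | bc  # 1.25 -> spending about 25% more than revenue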
I suppose it's a good thing that the deficit is about half of last year's, but watch out for the man behind the curtain. We need to dig a lot deeper to find out how they worked that rabbit's foot.
But if it is real then this is indeed a step in the right direction.
But by no means a solution. This deal does let them pretend that they are playing nice, for the Main Stream Media and people who can neither think, nor read, nor calculate but just "feel".
It is definitely a "feel good" solution.
Kumbaya.
http://en.wikipedia.org/wiki/2013_United_States_federal_budget
http://en.wikipedia.org/wiki/2014_United_States_federal_budget
http://nationalpriorities.org/budget-basics/federal-budget-101/spending/

Saturday, November 23, 2013

Multiple Migrations


BLUF: It's easy when you know how.
There are times that try men's souls, and among them are the times when you have to change your computer's architecture or operating system (OS).
This is referred to as migration.
Now, imagine the trials associated with doing both on two integrated computers, all at the same time.
Yikes!
This is how it went:
There is really only one computer (Hewlett Packard tm2t Touchscreen) in this case, with 8 Gigabytes (GB) of Random Access Memory (RAM) and a 500GB Hard Drive (HD) that has been divided ("partitioned") into two parts:
  • 100GB "root" partition (called "/")
  • 400GB "data" partition (called "/data")
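(For the curious, here is a hypothetical sketch of how such a split might appear in /etc/fstab; the device names are made up for illustration.)

    # /etc/fstab -- hypothetical device names
    /dev/sda1   /       ext4   defaults   1 1
    /dev/sda2   /data   ext4   defaults   1 2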
A normal computer system comprises two basic physical elements: the physical Hard Drive (HD) and a set of memory chips for Random Access Memory (RAM).
Typically the HD has hundreds of "zerabytes" while the RAM only has 5-10 zerabytes.
The HD may be either an actual physical spinning platter of some sort or a solid-state drive comprising zillions of memory chips on a super chip.
The RAM is most decidedly an array of chips on a board.
Here, zera is a placeholder for Kilo, Mega, Giga, Tera, Peta.... I can't find that it actually is assigned any value, but your mileage may vary. At present we are talking zera = GIGA: one Gigabyte (GB) = 10^9 bytes.
So in today's terms, a system might comprise 500 GB of HD space and 8 GB of RAM. The HD is used for storing information while the RAM is used for executing applications that use the stored information.
But then it gets more complicated, as the HD can be logically separated into several “partitions”, each with its own file structure and contents. Deciding on which directories to put in how many partitions is very much an art, not a science.
The basic idea is to allow disposable or easily reproducible information to reside in one partition and the more precious irreplaceable information to be barricaded somewhere else, so that if the first fails or is corrupted then the latter is protected.
In this representative system:
  • We have one partition (the "/" or "root" partition) containing all the elements associated with the operating system (OS): basic OS, the application files, hardware driver files, and so forth. Especially, files containing the user preferences, or “Settings”.
  • Then we have a second ("/data") partition that contains all the personal data, documents, and other information we want to keep safe. This drive has been encrypted with yet another password.
  • Then there is a bit of hybrid data, such as Settings, that need to live in the / partition to be read by the executing applications but are something of a pain to set up and a pain to lose if you have to reinstall. In this case we:
    • Build them in the / partition
    • Move them to the /data partition and then
    • Provide a pointer (called a symbolic link) in the / partition that points to their location in the /data partition (a concrete sketch follows this list).
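Here is what that move-and-link dance might look like for, say, a Firefox profile; the /data/settings directory is a made-up name, so substitute your own scheme:

    # Hypothetical example: park the Firefox settings on /data, leave a link behind
    mkdir -p /data/settings
    mv ~/.mozilla /data/settings/mozilla
    ln -s /data/settings/mozilla ~/.mozilla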
The idea is to install the operating system and all the stuff you might want to change or upgrade in the / partition while keeping personal data safe from exploitation or corruption in the encrypted /data partition.
That way, when you upgrade the system you can reformat the / partition and install the new versions there without losing all your personal data.
You'll have to reconstruct the symbolic links, but that is a relatively trivial (easy when you know how) task if you've kept records of your links. See PPPPPP  below...
So, having taken the time and discipline to record the applications, links, and repositories (more on those later) that you are using, it is an easy matter to install a new or reinstall the former OS. You simply reformat the first partition, replacing all its contents, reinstall the applications from your saved sources or updated repositories, and then reconstruct the symbolic links to settings.
Seldom more than perhaps an hour or so of work.
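To give a flavor of that hour: suppose the record includes a plain list of package names, one per line (the path below is invented). Then the reinstall is nearly a one-liner, give or take packages that changed names between releases, and each recorded link is a single ln command:

    # Reinstall everything named in the saved record, then rebuild the links
    sudo zypper install $(cat /data/migration/package-names.txt)
    ln -s /data/settings/mozilla ~/.mozilla    # ...and so on for each recorded link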
OK, so the first step was to migrate the basic machine from a 32-bit architecture to a 64-bit architecture.
What does this mean?
Architecture refers to the underlying physical structure of the computer. This includes the dynamic Random Access Memory (RAM) chips, where the moment-to-moment execution of program steps takes place, the more static Hard Drive (HD) storage where programs and data are stored (but not executed), and the overall structure of all of these.
Perhaps the most fundamental issue is “How much stuff can you usefully save?”
This depends not only on how big the box might be (RAM and HD size) but also whether you can find it once it is in the box.
Finding stuff on the HD is easy: you simply specify a path through the directories and subdirectories to the file you seek.
But RAM has specific locations ("addresses"): particular sets of transistors in the array where the information for a particular record starts and stops. So you have to know how many addresses you can keep track of and read.
This, in turn, depends on how big your addresses are.
These addresses are defined by a certain number of bits: single states of a group of two-state (binary: on-off) transistors. So dig out your statistics book and calculator:
  • If you have x bits (transistors) per address you can form 2^x different combinations of those bits, i.e., 2^x distinct addresses, assuming you have enough RAM.
If you have 32-bit addresses then you can address 2^32 = 4,294,967,296 different memory locations (about 4.3 billion), which for byte-addressed RAM works out to about 4 GB.
But these days, RAM comes in tens of GB, so we need something else. By moving to a 64-bit architecture we can now address about 1.8 x 10^19 (18 followed by eighteen more digits, actually 18,446,744,073,709,551,616) different addresses, which indeed is a very big number.
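If you don't trust my arithmetic, bc will happily grind out both numbers:

    # Address-space sizes for 32-bit and 64-bit addresses
    echo '2^32' | bc    # 4294967296             (~4.3 x 10^9)
    echo '2^64' | bc    # 18446744073709551616   (~1.8 x 10^19)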
Which might be a good thing to do.
So we proceed with this:
  • Download the new operating system as an image of a DVD and burn it to a new DVD-R disk
  • Write and save a record in /data that summarizes:
    • All the programs we needed to reinstall after installing the new OS 
    • All the links we needed to reconstruct
    • Network settings and passwords
    • And, importantly, all the repositories that were being used
                In short, we exercised the axiom: Prior planning prevents {} poor performance (PPPPPP); a minimal sketch of such a record follows this list...
  • Put the disk in the DVD drive and reboot the system
  • Make sure it is running well, and, well...
  • Go to the gym.
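A minimal sketch of what that record-keeping might look like on openSUSE; the /data/migration directory and the file names are invented for illustration:

    # Squirrel away everything we'll need after the / partition is wiped
    mkdir -p /data/migration
    zypper lr --uri                 > /data/migration/repositories.txt   # repositories in use
    rpm -qa --qf '%{NAME}\n' | sort > /data/migration/package-names.txt  # installed packages
    find ~ -maxdepth 2 -type l -ls  > /data/migration/symlinks.txt       # symbolic links to rebuild
    cp /etc/sysconfig/network/ifcfg-* /data/migration/                   # network settings (openSUSE layout)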
When we came back the new 64-bit OS was running, just fine.
Next, as expected, we reinstalled the programs and symbolic links from our record in /data.
Reinstalling the programs requires downloading them from the distribution's repositories.
OK, so what is all this blather about repositories?
Repositories are simply that: a place where piles of stuff are being saved. In computer terms they are internet locations where developers upload and "serve" the latest versions of the various applications and other files. And there are zillions of them.
The point is that with repositories you have some semblance of assurance that the programs offered ("served") there have had some sort of oversight and review so that they are neither corrupted nor malware-infected. Not a guarantee, but a stronger assurance. Furthermore, the programs served there have more likely been checked for the other files that are needed for them to run (called "dependencies") so your hassle may be minimized in loading the applications from the repository.
Look: there are some people who like a top-down Bottom-Line-Up-Front approach (guess who) and some who like to work at perfecting the finest details of an object. Thanks be to God for both. The repositories are the domain of the latter. In Free and Open Source Software (FOSS) there are literally thousands, if not millions, of people around the world working through an elaborate cooperative scheme to develop, track, debug, and serve thousands if not millions of different applications, at least for Linux-based systems. These people fix bugs, find corruptions, and generally make the world a better and safer place.
For free. Because they want to and can.
So repositories are your friend. Download software from "official" repositories  whenever possible.
Configuring repositories is yet another art, rather than a science. They are a complex, albeit straightforward, functional decomposition of applications by categories. So if you are interested in a particular type of software, e.g., Education, then there is probably a repository that contains only applications associated with Education.
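As a hypothetical example (the URL below follows the usual openSUSE Build Service pattern for such project repositories, so verify it before trusting it), adding a themed repository takes a single command:

    # Add an "Education" repository with autorefresh, then pull in its package lists
    sudo zypper ar -f http://download.opensuse.org/repositories/Education/openSUSE_13.1/ education
    sudo zypper refresh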
Topic for another time, but (remember the six Ps) if you record the repositories in the old installation you can then (at least for openSUSE) just change the version number in each repository's URL: the old release number simply becomes 13.1.
And so forth for the remaining (23 in my case) different repositories...
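Since those repository definitions are just small text files under /etc/zypp/repos.d/, one sed pass can flip the lot. The old release number below (12.3) is an assumption, so use whichever release you are actually leaving behind, and keep a backup of the files:

    # Point every repository at the new release in one go (old version number assumed)
    sudo cp -a /etc/zypp/repos.d /etc/zypp/repos.d.bak
    sudo sed -i 's|12\.3|13.1|g' /etc/zypp/repos.d/*.repo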
=====
So, after all that we were done.
Well, sort of. It's a bit more complicated.
You see, we're running yet another machine in software inside of this machine. This is called Virtualization:
Virtualization is a technique that allows you to embed one computer inside of another.
A virtualization application, such as Oracle VirtualBox, VMWare, or several others, creates an entirely new computer in software; hence the term “virtual machine” or VM.
The virtualization application (e.g., VirtualBox) resides in the first (/ or root) partition and is executed in RAM. It reads and operates on the virtual machine (VM), which is a rather huge single file that resides in the /data partition of the parent (host) machine and is referred to as the guest machine.
The guest in turn contains one or more HD partitions of its own, each with its own set of an operating system, applications, and user data which can be and usually are completely different from those of the host system. And it has its own RAM, necessarily smaller than that of the host, since it has to use the host RAM resources.
This process of virtualizing a machine within a machine within yet another machine... can continue as long as time (and actual hard disk and RAM space) exist.
(I'm reminded of the Sorcerer's Apprentice... smart man, Walt Disney...)
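If you are curious what the host thinks it is hosting, VirtualBox will tell you from a terminal; the VM name in the second line is, of course, whatever you called yours:

    VBoxManage list vms                 # every guest machine registered on this host
    VBoxManage showvminfo "WindowsXP"   # details for one guest: RAM, disks, state...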
So, upgrading all this is not a trivial matter:
  • You can choose to upgrade the host system without affecting the embedded (guest) system.
  • You can choose to upgrade the guest without affecting the parent host system.
But there are physical limitations to HD and RAM space, so if you want to upgrade both, then there is a definite cascade of steps that you must take in order to succeed:
  • First upgrade the top level host system
    • The new main operating system (in this case openSUSE 13.1 ) is a 64-bit system, while the older system was 32-bit. So we could not use the normal zypper distribution upgrade process, which simply requires renaming the repositories and issuing a single command:
      • zypper dup 
    • Instead, it required a "clean" install: reformatting the / partition, reinstalling all the applications, and reconstituting the symbolic links. Not a huge effort but, as noted, still more effort than zypper dup.
  • Next upgrade the first level guest system
    • The Windows migration in the virtual machine was from 32-bit Windows XP to 64-bit Windows 7. This required a bit more effort:
      • First we had to increase RAM in the VM from 1GB to 2GB. Merely a click of a button (or a single command; see the sketch after this list) once we knew how, but finding out how took a few hours. First we had to take the machine apart to confirm that there was enough physical RAM present, since free only showed 2GB. It turns out that we were using a 32-bit system without PAE that, for the reasons discussed above, cannot see more than 4 GB. The VM assumed the host only had 4 GB and would not allow increasing VM RAM beyond 1 GB.
      • Next we had to increase the virtual HD from 20 GB to 40 GB, according to Microsoft's system requirements page. But hello, that required a partitioning tool. Progressive and straightforward once you know how, but more time gone: downloading, mounting, and farkling about. You have to first increase the size that VirtualBox allows in the host, then actually go into the guest operating system and grow the partition there as well.
      • Then we cloned the system to ensure we still had a system if all else failed (see PPPPPP).
      • This caused a conflict since the new drive had the same UUID as the old drive (duh, look up clone - how do you spell "genetically identical") so there was some more messing about to figure out how to change the UUID.
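For the record, all of the VirtualBox surgery above can also be done from the command line. A minimal sketch, with the VM and disk file names made up for illustration (sizes are in megabytes):

    VBoxManage modifyvm "WinXP-to-7" --memory 2048             # raise the guest RAM to 2 GB
    VBoxManage modifyhd /data/VMs/windows.vdi --resize 40960   # grow the virtual disk to 40 GB
    # A raw file copy of a .vdi keeps the old UUID; give the clone its own:
    VBoxManage internalcommands sethduuid /data/VMs/windows-clone.vdi

Growing the .vdi only enlarges the container; as noted above, you still have to go into the guest and extend the Windows partition into the new space.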
But eventually we were ready and followed an excellent set of online instructions to succeed.
And of course there are always some dirty little secrets that you have to figure out:
  • Every non-Microsoft legacy program that I have tried to reinstall under this Windows 7 migration has thrown an error stating that "this is not a valid Win32 application."
It turns out that there is a "Troubleshoot Compatibility" option if you right-click the application in a file manager, which in turn reveals the need for (and automatically applies) a Windows XP Service Pack 2 compatibility setting. Then the application runs quite happily.
So like I said: Easy when you know how.

This is still a work in progress. The touchpad doesn't work under the host system, and some other programs, despite the "compatibility" check, still don't work in the guest. So we shall update.
But in any case, it has been an "experience".
And not altogether bad.

Monday, November 18, 2013

Mea Culpas, Proverbs, and Sayings: Snopes.com

Oy vey!

I have gone astray (mea culpa, mea culpa, mea maxima culpa) several times recently, by no one's fault but my own. (I and no one else is responsible for my happiness: Atlas Shrugged and many others). 

I have excitedly forwarded information that has been sent to me from trusted sources, but that turned out not to be true.

I try to live by "once burned, twice foolish" but surprisingly googling fails me, offering this only in Somali, though it does offer alternatives such as

    http://idioms.thefreedictionary.com/Once+bitten,+twice+shy, which is not quite the same or

    http://idioms.thefreedictionary.com/Fool+me+once,+shame+on+you%3B+fool+me+twice,+shame+on+me, which is closer.


So, thankful and confident that it's an ill wind that blows no good, I set about devising a method to make checking Snopes.com so dead easy that it's even more embarrassing to be caught out for not having checked our sources.

So here it is, as simple and easy as possible, depending on whether you are under Linux or Windows (Sorry Mac folk, you'll have to fend for yourself):

• The basic snopes.com search engine is at http://www.snopes.com/info/search
---------------------------------------------
• Under Linux (since every good Linux geek always has a terminal open) just enter 

     firefox http://www.snopes.com/info/search
    {Enter}
at the command line, or more simply, invoke

    checksnopes

    where checksnopes is a BASH script that you have written, saved somewhere on the PATH set in your .bashrc, to the following effect:

        #!/bin/bash
        # Call Firefox with the Snopes search page
        firefox http://www.snopes.com/info/search
    (Yes, there are many cuter ways to do this, but this is possibly the simplest...)
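One small footnote, with a made-up location: the script has to be executable and live somewhere on the PATH that your .bashrc sets up, for instance:

    chmod +x ~/bin/checksnopes                           # make the script executable
    echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc    # if ~/bin is not already on the PATH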
---------------------------------------------
• Alternatively, under Windows (or Linux, for that matter), open a browser to the site
http://www.snopes.com/info/search and then create a bookmark on your desktop or toolbar (usually by dragging and dropping the URL).
---------------------------------------------
• In either case, doing these will take less time than it took to read this.

    Invoking either of these will open a new Firefox window (or a new Firefox tab, depending on your system settings) zeroed in to the Snopes.com search window. Type in whatever suspect phrase you have and you'll see the answer. 


Chances are, if it's too good to be true, then it isn't... Q.E.D. :-( 


Of course, Snopes is not the final word; there are several others that you can pursue, depending on how much time you have to spend, e.g., http://www.truthorfiction.com/

But the point is, we need to check our sources since we are now engaged in the new journalism.    

---------------------------------------------
Now, if you're really dedicated you can bookmark a phrase like:

    "Yes! I checked with http://www.snopes.com/info/search and it's TRUE!"

that you can then drag and drop into your email.

But automating that is a task for later. After all, tomorrow is another day.

:-)
=========================================================
Update:



An easy solution to the last problem (inserting a saved phrase) is QuickText, a Thunderbird addon. It saves any number of different little text snippets and provides a button in the composition window. Place the cursor, then click the {Snopes} button and the text is entered.
Very cool.