Judging vs Perceiving Dominant Types


In a previous post, MBTI: Not Misleading, Just Misunderstood, I mentioned judging and perceiving dominant functions. While talking to a friend, I believe I stumbled upon a good way to differentiate between the two: consider judging and perceiving in terms of communication.

Judging dominant types are almost always judging what they hear and what they say, but their judgments are works in progress. Such types give clues, verbal and nonverbal, as to the degree of certainty in what they are saying. They might say, “In most cases, I think that this is true.” Over time, their judgments progress from less certain to very sure; eventually, they will say, “This is true.” Some things are never judged, some start with no previous judgment, and others are judged strongly and immediately (the result of similar decisions in the past). As judgers examine and re-examine, they re-judge until they reach a point of strong certainty.

Conversely, perceiving dominant types do not immediately judge unless they have made similar decisions in the past. Without prior judgments, they wait until the very end to decide. Unfortunately, they (especially dominant perceivers with a thinking secondary) usually give no clue, verbal or nonverbal, as to the certainty of their statements, mainly because they have not yet decided anything. They will say, “This is false,” with no, little, some, or total certainty; a listener can’t tell which. Perceivers wish to explore all the possibilities before deciding on one. What they state may not be certain at all because no judgment may be attached.

The two different styles result in conflict when they communicate with each other. Judging dominants are continuously judging and communicating their level of certainty. Early on or midway through the discussion, they may say “I believe this is so but I may be wrong”. Perceiving dominants instead will state “This is so” without indicating any level of certainty.

Each type believes they are communicating with their own type. Unfortunately, this may not be the case. A judging dominant will believe that a perceiving dominant is very certain (because there is no verbal qualifier on the statement made) and attempt to figure out why the perceiver believes his statement to be true. The judger then asks questions and comes up with exception cases. The perceiver, who is “just throwing it out there,” wonders why the judger is questioning him about what he said and attempting to close off possibilities prematurely. The perceiver first responds by trying to answer the judger’s questions, quickly gets annoyed when the questions persist, and starts throwing out other statements or possibilities in reply to the exception cases. The judger gets vexed because he views the perceiver as unwilling to explain why, making tangential statements that may conflict with each other, and changing judgments randomly. The end result is a communication breakdown and mutual irritation.

The above situation is made worse if either of the two types lacks emotional maturity and strong self-esteem. A perceiving dominant will feel under attack by the judging dominant and wonder why the judger is questioning him. A judging dominant will feel that the perceiver is not being serious, being disrespectful, and making fun of him. Negative emotions mix into the irritation cycle and can build up to eruptions of anger at each other.

Perhaps a better understanding can be arrived at by considering how each type brainstorms. When brainstorming, judging dominants make continual judgments that they refine using exception cases. Judgers examine the feasibility of a possibility before moving on to the next one; judgers explore depth first. Perceiving dominants wish to explore all the possibilities before determining any possibility’s feasibility; perceivers explore breadth first. Seeking refinement, judgers will look for exceptions to what a perceiver suggests. The perceiver may feel that such questioning closes off the possibility prematurely. And because the perceiver’s statements sound very certain, the judger may feel that the perceiver is making judgments far too soon. Brainstorming becomes an unpleasant experience for both types.

To make my point, I have painted the two types in their extremes. Both types make judgments. For judging dominants, the judgment is spread over the whole process. For perceiving dominants, the judgment is compressed to the end. Unfortunately, a non-judgmental statement said by a perceiving dominant sounds like a definitive statement to a judging dominant. A definitive statement said by a judging dominant sounds like a non-definitive statement to a perceiving dominant. These misunderstandings lead to a communication cycle which will frustrate both types.

The cycle can only be broken if at least one of the types does not assume that they are talking to their own type and makes the effort to determine the type they are speaking to. Judging dominant types should not ascribe certainty of judgment to statements made without qualifiers by perceiving dominant types. Absent any expression of degree of certainty, the judger should ask directly whether what was stated is a definitive judgment. Perceiving dominant types should ask if there is any certainty to another’s statement, instead of assuming none, and, if possible, provide verbal indicators as to the certainty of their own statements.

When you find yourself getting irritated in a discussion, stop and ask yourself whether your conversation partner is of a different type. Adjust accordingly and with patience, you will communicate and feel better and so will the other person.


Real Insurance Is Better than AppleCare


Recently, my brother-in-law decided to buy a 13-inch Apple MacBook Pro with Retina display. He asked me if the AppleCare Protection Plan, which costs $249 extra, was worth it. I told him no. For the same amount of money or less, one could get a better protection plan than AppleCare.

The AppleCare Protection Plan is a three-year extended warranty. It only covers malfunctioning parts; it does not cover accidental damage, loss, or theft. If you drop the MacBook and the display cracks or the laptop stops working, you are out of luck because AppleCare does not cover that. You will need to pay the full repair price, which could be $1,000 or more to replace a Retina display. If you spill water on the keyboard and your MacBook shorts out, that’s too bad. If you lose the MacBook or someone steals it from the safety of your house, oh well, that’s the way the cookie crumbles. AppleCare does not cover any of that.

What does AppleCare cover? Well, if your keyboard or display malfunctions through no fault of yours, then Apple will repair or replace that component. The Genius Bar staff at your local Apple Store will check the MacBook for damage, such as large dents, that could have caused the malfunction. If they find such damage, they can refuse the repair; if you are very lucky, you will get someone nice enough to allow the free repair. Be aware that Apple puts moisture detectors inside the MacBook, so if you spill water on it, the Genius Bar will know and can refuse the repair even if you have AppleCare Protection.

There is a reason why retailers push extended warranties so often: they make a lot of money off of them. If a product such as a laptop is going to fail, it will most likely fail during the first year, while it is still under the standard one-year warranty. So when a customer pays for an extra year or two of warranty, that is almost guaranteed pure profit for the retailer. That’s not to say a product never fails a day or two after the one-year warranty expires, leaving the owner glad to have paid for the extended warranty; that case is the exception, though. The odds suggest not buying an extended warranty.

When most people buy AppleCare, I think they believe that they are buying insurance. They aren’t. Insurance can cover repair or replacement due to accidental damage, loss, or theft; AppleCare is not insurance. For the same amount of money, they could get real insurance that provides greater peace of mind.

First, before we talk insurance, make sure that you buy that expensive MacBook with an American Express credit card. (If you don’t have an American Express card, get one. There are basic American Express cards with no annual fee that provide the two benefits below.) Purchasing an item with an American Express card provides the following two benefits for free:

  • Purchase Protection: your purchase is protected from accidental damage or theft for up to 90 days from the date of purchase. You will be reimbursed up to $1,000. (If you have an American Express Platinum card or similar, you also get protection from loss, and coverage up to $10,000.)
  • Extended Warranty: doubles the warranty on your purchase up to an additional year. For an extra year after the original warranty expires, if the purchase malfunctions through no fault of yours, you will be reimbursed the original cost (up to $10,000).

Purchasing the MacBook with an American Express credit card adds one additional year of extended warranty on top of Apple’s standard one-year warranty for free. It is a no-brainer to do so. American Express service is very friendly. For example, the Wi-Fi feature on my sister’s iPhone 4S broke (due to an Apple hardware defect) after she upgraded to iOS 7. Because it was past the one-year warranty, Apple wanted $200 to replace the iPhone. I told my sister to call American Express to see if she was eligible for the free extended warranty. She was. They asked her to send them the receipt and then gave her a credit for the original cost of the iPhone. How cool is that?

Second, before buying insurance for your MacBook, check that you don’t already have it. You may have a rider on your homeowner’s or renter’s insurance that covers personal property such as electronics. Check to see what is covered. For example, my renter’s insurance covers theft and loss of personal property due to fire, burst pipes, etc. Unfortunately, accidental damage is not covered. In the case of theft or loss, my renter’s insurance company will reimburse me for the depreciated cost or, if I purchase a replacement, cover the original cost. (This is nowhere near as nice as American Express’ full credit of the original cost without requiring you to buy a replacement.)

If you don’t have a rider for personal property like electronics, ask your home insurance company about one. It may be the cheapest option because a rider is considered part of the bundle and you may get a better deal that way. Homeowners on forums gave examples such as $40/year for $2,000 of coverage (much less than AppleCare) or $80/year for $5,000 of coverage against malfunctions, accidental damage, loss, and theft. The cost seems to vary widely (one homeowner mentioned $15/year, another $30/year for $4,000 of coverage, and a third mentioned a $50 deductible).

Instead of a rider, you can purchase personal property insurance directly. Most likely, standalone insurance will be more expensive than a rider. The most recommended standalone insurance for a MacBook is an inland marine insurance policy (available from several insurance companies, such as Allstate, State Farm, and Farmers Insurance). Though originally created to cover expensive electronics on boats, the policy applies to land usage also. In a forum post (Best Insurance on Earth for your MacBook / Air etc. Far better than AppleCare), one person mentioned that it costs about $32/year for $1,500 of coverage across multiple devices with no deductible. (It was also mentioned that the inland marine policy did not cover phones.)

Another option mentioned was State Farm’s Personal Articles Policy, which costs $60/year for $3,000 of coverage. There are also insurance companies, like Safeware, that sell policies specific to high-end equipment and electronics. As with all other types of insurance, make sure to shop around for the best deal.

When talking about insurance for electronics, SquareTrade is a name that often comes up. SquareTrade costs about the same as AppleCare, but in addition to the extended warranty, it covers accidental damage. Because SquareTrade does not cover loss or theft, I believe it is not the best deal. SquareTrade is better than AppleCare, but you can get better insurance than both for less.

Insurance is personal. I do not purchase extra insurance, such as extended warranties. (I do use an American Express card to take advantage of the free extra year of warranty, though.) I am gambling that I’ll be careful enough not to break my MacBook, lose it, or be robbed. Considering all the money I have saved by not buying extra insurance or extended warranties on my many laptops, even if I have to pay full price to replace a MacBook, I will break even or come out ahead. However, if you feel more comfortable having some protection (nothing wrong with that), please consider the above alternatives to AppleCare. You will get much more bang for your buck.


Update to Latest Subversion Using MacPorts


Because I use MacPorts to install my development tools on Mac OS X, installing or updating Subversion is simple, consisting of a single command.

Install MacPorts

If you don’t already have MacPorts, go ahead and install it. MacPorts depends upon Xcode and the Xcode Command Line Tools. Instructions for installing both are provided on the MacPorts website.

Update MacPorts

Before installing or updating Subversion, you will want to update MacPorts by issuing this command:

sudo port -v selfupdate

Install Subversion

To install Subversion, issue a MacPorts install command like so:

sudo port install subversion subversion-javahlbindings

The Subversion JavaHL Bindings (“subversion-javahlbindings”) package is necessary to support integration with Eclipse, specifically via the Subclipse plugin. Thank heaven that MacPorts got around to supporting the JavaHL Bindings installation. Before, I had to manually find a compatible version of the JavaHL Bindings, download it, and install it myself.

Note: When installing the Eclipse Subclipse plugin, you will need to select the Subclipse version that uses the same Subversion version as your installed Subversion and JavaHL Bindings. The version numbers don’t match, so you will need to consult the Subclipse documentation to determine which version of Subclipse to install. For example, Subclipse 1.10 uses the latest Subversion 1.8.

Update Subversion

You can update Subversion specifically or update all outdated MacPorts-installed packages by issuing these commands:

# Update only Subversion and JavaHL Bindings
sudo port -v upgrade subversion subversion-javahlbindings

# Update all outdated installed packages including Subversion
sudo port -v upgrade outdated

Install or Update Subclipse

To install or update the Eclipse Subclipse plugin, you use the same installation instructions; Subclipse doesn’t have a separate update mechanism. To update Subclipse, you simply install a newer version (without needing to remove the older version first).

Note: Eclipse has a menu item, Help->Check for Updates, which will update itself and supported plugins; unfortunately, Subclipse does not support this function.

To install or update Subclipse, follow these steps:

  1. Go to Eclipse menu: Help->Install New Software…
  2. Input “http://subclipse.tigris.org/update_1.10.x” into the “Work with” field and the table will be updated with installation packages available at that location. (Note: Subclipse 1.10 uses the latest Subversion 1.8.)
  3. Check just the Subclipse package and keep clicking Next until the end. Half-way through, you will be asked to accept the license agreement. Select the “I accept the terms of the license agreements” radio button and click Finish.
  4. You will get a security warning popup with the message, “Warning: You are installing software that contains unsigned content.” Click the OK button to proceed.
  5. Eclipse will need to restart. You will be prompted with a “Software Updates” popup asking “You will need to restart Eclipse for the changes to take effect. Would you like to restart now?” Answer Yes.

Use Older Subversion

MacPorts allows you to select an older version of its packages for use, instead of using the latest version. This is useful in case you do an update and realize that you can’t use the latest version of a particular package, perhaps due to software version incompatibility with one of your tools or applications. For example, because the latest version of Subclipse may not support the latest version of Subversion, you may need to force the use of the previous version of Subversion.

To see all the installed versions of Subversion, run this command:

sudo port installed | grep -i subversion

You should see something like the following output:

subversion @1.7.8_2
subversion @1.7.10_1
subversion @1.8.8_0 (active)
subversion-javahlbindings @1.7.8_2
subversion-javahlbindings @1.7.10_0
subversion-javahlbindings @1.8.8_0 (active)

To activate the previous version of Subversion, use these commands:

sudo port activate subversion @1.7.10_1
sudo port activate subversion-javahlbindings @1.7.10_0

If you are using the latest Subversion and want to uninstall all the older versions, run either of these commands:

# To uninstall a specific version of Subversion
sudo port uninstall subversion @1.7.10_1
sudo port uninstall subversion-javahlbindings @1.7.10_0

# To uninstall inactive versions for all packages including Subversion
sudo port uninstall inactive

I’m very glad that MacPorts exists to make installations and updates so painless.


Send Me Money, Sucker!


Recently, I got an email from a family member which reads as follows:

From: XXXX@yahoo.com
Subject: Urgent!

Hi,

I'm out of town suffering a terrible incident, I need your urgent favor,
Please email me back as soon as possible.

Thanks.

XXXX
(XXX) XXX-XXXX

The displayed email address looks correct as XXXX@yahoo.com, but when I checked the headers, the Reply-To address was XXXX@outlook.com. And the phone number has the wrong area code.
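Viewed as raw message headers, the telltale mismatch looks something like this (addresses anonymized; an illustrative sketch, not the actual headers):

From: XXXX@yahoo.com
Reply-To: XXXX@outlook.com

Most mail clients have a “view source” or “show original” option that reveals these headers.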

I recognized it as the money gram scam. Basically, if you reply to that email, you will receive a request to send money by Western Union (or a similar money transfer service), where it is easy for anyone to pick up the cash. (If you call the number, you will probably get voicemail.) The scam works like this: hack into someone’s email account, send this same message to everyone in the address book, and hope that one or two people will fall for it and send money.

I sent a warning to my family and wasn’t surprised to find that most did not recognize this email as a scam. They were confused or thought it was a joke. The family member whose email account was hacked disclosed that several friends and acquaintances were calling to ask why he needed $930. This tells me that a lot of folks are not knowledgeable about Internet scams. I want to talk about scams, Internet and otherwise, and the one method I use to fight them.

In the past, this and other frauds were perpetrated by isolated con artists. Nowadays, I believe that most scams on the Internet are run by criminal organizations. If I were a mafia boss, I would definitely have an Internet racket because, face it, you can make a ton of money (from hundreds of millions of potential victims) with very little risk of getting caught or punished, especially if you are located in another country.

So, there are groups of hundreds of criminals, backed by the best servers that dirty money can buy, running scams across the Internet (and elsewhere). They are working full-time to steal money from you and the companies you do business with. If they are truly international, they may be working around the clock across multiple time zones, while you are sleeping, eating, going to the bathroom, and watching TV.

You may throw up your hands in defeat at this point. And to be truthful, I agree. There is no way you can beat everything an organization like that can throw at you. The best you can aspire to be is a potential victim who would take too much effort to defraud. Sad to say, your goal is simply to be less naive than the masses. Or more simply: the criminals will go for the lowest-hanging fruit, and your job is to avoid being the lowest-hanging fruit.

The most powerful tool we victims have in our arsenal is to “trust but verify,” or more accurately, to verify before trusting. This applies to almost everything in life. To illustrate, one of my friends did fall for the money gram scam above a couple of years ago. After sending the money, she had some doubts, so she called the friend, who replied, “What? I’m not in XXXX country, robbed of everything, and in need of money!” My question is: why didn’t she call the friend or the friend’s family before sending money? If she had verified first, they would have told her that the email was a fake.

Email Links: Bad Idea

Avoid clicking on any links in an email, especially an email from your bank. Definitely do not log in if the link takes you to a login page where you are prompted to input your username and password. Instead, open up a browser and manually type in the address of your bank or whatever site is involved.

If you’re lucky, clicking on links indiscriminately may only get your computer infected with a virus or spyware that slows it down. If you’re unlucky, a virus will erase your hard drive, or spyware will record what you type, like passwords, and transmit the data to someone who doesn’t have your best interests in mind. Worst of all, if you click on a link to your bank account and input your username and password, you may have just given a criminal access to your bank account.

The last is referred to as phishing (pronounced like “fishing”, because they are fishing for your money), which involves pretending to be a trustworthy entity in order to acquire sensitive information. Basically, someone nefarious creates a website that looks exactly like your bank’s login page. They send you a fake email from your bank with a link. When you click on the link, you are taken to the fake login page. After you input your banking username and password, they can forward you to the real bank or just throw an error saying maintenance is in progress. In the meantime, they have your username and password to access your bank account.

Phishing may be used to gain access to accounts at companies other than your bank: investment firms, credit card companies, loan application processors, mortgage payment companies, etc. I believe that all legitimate businesses should make it a policy not to include any links in their official emails; instead, they should ask their users to manually browse to the company website.

Note: If you receive a complicated link in an email, perhaps pointing to a specific Google or Yahoo photo album, which requires a login and you can’t figure out how to manually browse to it, here’s what you can do:

  1. Browse to the company address by manually typing it in, and log into your account.
  2. Go back to the email and click on the link.

If the link is legitimate, the system will recognize that you are already logged in and bypass the login screen. You would then go directly to that page; that is, the photo album. Doing the above will help you to avoid being tricked by a phishing website.

Phone Calls: Just Hang Up

Similar to the above, if you get a phone call from your bank and are asked to verify your identity, ask what the call is about, say bye-bye, and call your bank’s official phone number (listed on the back of your ATM card, on their website, or in the phone book). Calling them directly is the equivalent of manually browsing to the company website. If the “bank” calls you and you provide your verification info (mother’s maiden name, Social Security number, etc.), you may have just given your identity away to thieves, who could then gain access to your accounts or, more likely, open a new credit card or loan in your name.

Knowing the above, the perpetrators will attempt to override your caution. A year ago, I got a phone call from my credit card company. They told me that they believed my credit card number was stolen because they were seeing charges for flowers amounting to over a thousand dollars in Florida. They asked me to verify my identity so they could confirm that the charges were fraudulent. Of course, I answered every question they asked. Afterwards, I realized with horror that I might have just given the keys to my identity away to someone who “called” me on the phone. Thankfully it was a legitimate call, but it could easily have been a trick. What I should have done was ask what the call was about, hang up, and call the credit card company back directly.

Phishing: Old as the Pharaohs

The above could have been a phishing attempt. Phishing isn’t something new to the Internet; it has been around for a long time. I’m sure it has existed since mankind first discovered how to cheat and steal. I think all effective scams involve phishing (again, pretending to be a trustworthy entity) because no one hands their money to an entity they don’t trust.

For example, suppose that you are on a business trip. You arrive late at the hotel. You’re hungry but too tired to go out. Conveniently, there is a flyer for pizza delivery that someone slipped under the hotel room’s door. You dial up the pizza place, make an order, and pay with your credit card. An hour later, the pizza hasn’t arrived. You call back and get some lame excuse (the oven exploded; sorry, no pizza for anyone), or maybe no one picks up. Congratulations, you’ve just had your credit card number stolen.

Remember what P.T. Barnum supposedly said: “There’s a sucker born every minute.” Try not to be that sucker. But if you do fall for a scam (which I must embarrassingly admit to doing once or twice), forgive yourself. You are only human. Just repeat to yourself, “There’s a human born every minute.” (To be exact, there’s a human born every 8 seconds.)


The Internet’s Future is Blacklisted


Over the weekend, I signed up for a shared web hosting plan because of a special deal. I spent a day setting up the host, migrating a website, and testing to make sure it worked. On Monday, when I went to work, I thought I would check on the status of my website. Imagine my surprise when I got a security warning that my website was dangerous, known to host viruses and spyware. How could this be? It is a respectable website that I had just moved to a new server.

It turns out that my work’s intranet is protected by a network security appliance called IronPort. IronPort in turn depends upon SenderBase, a blacklist service that identifies dangerous websites. The blacklist is keyed off the IP address. The new server’s IP address was flagged, and thus anything hosted on it (like my website) inherits the negative status.

When we get a shared web hosting account, we are assigned one of the servers that has available capacity. Now, why would that server have excess capacity? Perhaps because a previous user was kicked out for bad behavior, like distributing viruses, spyware, or spam. That user’s bad behavior got the IP address blacklisted, and now I am the proud owner of that banned IP address.

Note: The above doesn’t just apply to shared web hosting. If you get a private server or virtual private server, the provider company will give you an available IP address. That IP address could have belonged to someone previously who had misbehaved.

So maybe I and others whose companies use network security appliances can’t browse to my website. So what? We’re supposed to be working, right? Unfortunately, it turns out that email is also affected. If you expect to send and receive mail using your server, the server’s blacklisted IP address could cause all the email traffic to and from your server to get bounced (not delivered).

Worse, as far as I can tell, once the IP address is blacklisted, it is very hard to get that status removed. You’ll have to hope that your hosting provider is motivated enough to go through the hassle of engaging one or more blacklisting companies to remove that negative status. Even if your provider is willing, it will take time before the IP address is cleared.

Having learned my lesson, the first thing I suggest doing after getting a web hosting or private server account is to check that its IP address is not blacklisted. You can check the IP address on any of the free blacklist-lookup websites (MXToolbox’s blacklist checker and the Spamhaus lookup page are two well-known examples).

Note: Not all of the blacklists are widely used, so it may be okay for the IP address to be on one or two blacklists. However, to be on the safe side, it is best to have an IP address which doesn’t appear on any blacklist.
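For the curious, most of these blacklists are DNS-based, so you can also query one directly. A quick sketch using dig (the IP 203.0.113.7 and the Spamhaus ZEN zone are only illustrative examples):

# Reverse the IP's octets and prepend them to the blacklist zone.
dig +short 7.113.0.203.zen.spamhaus.org
# An answer in the 127.0.0.x range means the IP is listed;
# no answer means it is not on this particular blacklist.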

If your IP address is blacklisted, ask your hosting provider for another. If the company won’t accommodate you, then cancel and go with one that will. Believe me, doing so will avoid a lot of wasted effort and work. You don’t want a customer browsing to your company website only to get a stern warning that your website is known to distribute viruses.

I am afraid that I am seeing the future of the Internet. As security concerns grow, companies will invest in solutions, like network security appliances, that make use of blacklists (and maybe whitelists). Heck, if I were in charge of my company’s network security, a network security appliance would be the minimum that I would advocate. I would take more drastic steps, like locking down inbound and outbound ports and aggressively running heuristic checks on all internal traffic to detect viruses and spyware.


Make Mac Screen Lock Secure and Convenient


The MacBook I got for work is configured to require the password after the screen saver turns on or the display goes to sleep. By default, the screen is set to sleep after 2 minutes of inactivity on battery and 10 minutes on the power adapter. When I work on two computers, alternating between the MacBook and a desktop, I hate having to keep inputting the password on the MacBook to unlock it.

I understand the need for security, but I draw the line when it makes using the MacBook too inconvenient. I don’t want to eliminate the password requirement; I just want the screen locks (which require the password to exit) not to occur so often.

I considered adjusting the power settings so that the MacBook won’t go to sleep until an hour of inactivity on either battery or power adapter (and likewise changing the screen saver to wait an hour). However, making such a change would increase battery usage (the display uses a lot of power) and require a shorter interval between charges. (To preserve the battery capacity, I usually run the battery down until it is very low before charging, and when charging, I try to give it an opportunity to reach 100 percent.) While I don’t use the MacBook differently on battery versus power adapter, having to charge, and thus be tethered to the wall socket, more often is inconvenient.
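For reference, the sleep timers can also be inspected and adjusted from Terminal with the pmset utility; a quick sketch (I left my own settings at their defaults):

# Show the current power management settings.
pmset -g
# Set display sleep to 60 minutes when on battery (-b) only.
sudo pmset -b displaysleep 60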

I found the solution in System Preferences, under the “Security & Privacy” section. There is an option named “Require password [time interval] after sleep or screen saver begins” that controls when the screen lock activates. I changed the time interval from the initial 5 seconds to 1 hour. (There are 7 selectable time intervals, ranging from “immediately” to “4 hours”.) Now, when the screen saver runs or the MacBook goes to sleep (for example, when I close the lid), I don’t need to input the password as long as I wake the MacBook before the 1-hour interval expires.

This setting gave me a good compromise between security and convenience. I am not required to input the password after any inactivity of less than an hour, and I can leave the power (and screen saver) settings in battery-conservation mode.
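The same screen lock delay can reportedly be set from Terminal as well; a sketch, assuming the com.apple.screensaver defaults keys on this version of OS X (you may need to log out and back in for the change to register):

# Require a password after sleep or the screen saver begins...
defaults write com.apple.screensaver askForPassword -int 1
# ...but only after 3600 seconds (1 hour) have passed.
defaults write com.apple.screensaver askForPasswordDelay -int 3600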

But what if I need to put the MacBook immediately into screen lock mode? The answer, surprisingly, lies in the “Keychain Access” application. To support manually locking the Mac, do the following:

  1. Run the “Keychain Access” application (under /Applications/Utilities directory).
  2. Go to the “Keychain Access” menu, Preferences, General, and check the “Show keychain status in menu bar” option.

You should now see a lock icon on the top-right menu bar. When you want to manually lock the Mac, click on the lock icon and select “Lock Screen”.

Hopefully the above will help you secure your Mac without making it too inconvenient to use.

Note: The “Lock Screen” method above came from Quickly lock your screen. Unfortunately, on Mac OS X Mountain Lion, the rearrange-menu-bar-icon function (hold Cmd and drag the icon left or right) didn’t work, so I was not able to get a keyboard shortcut working for “Lock Screen”.


Nginx HTTPS SSL and Password-Protecting Directory


See my previous post, Nginx Multiple Domains, Postfix Email, and Mailman Mailing Lists, to learn how to configure multiple domains and get Postfix email and Mailman mailing lists working on an unmanaged VPS. In this post, I will configure Nginx to enable HTTPS SSL access and password-protect a directory.

Note: Though I’m doing the work on a Digital Ocean VPS running Ubuntu LTS 12.04.3, the instructions may also apply to other VPS providers.

Enable HTTPS/SSL Access

I have a PHP application that I want to secure. If I use HTTP, then the information sent between the server and my browser is in clear text (and visible to anyone sniffing the network). If I use HTTPS (HTTP Secure) with an SSL (Secure Sockets Layer) server certificate, then the information will be encrypted. In the steps below, I will configure HTTPS/SSL to work for a domain and then force HTTPS/SSL access on a particular directory (where the PHP application would be located).

To get HTTPS working, we need an SSL server certificate. While you can get a 3rd-party certificate authority to issue an SSL certificate for your domain for about $10 per year, I only need a self-signed certificate for my purpose. A 3rd-party-issued SSL certificate is convenient because if the browser trusts the 3rd-party certificate authority by default, it won’t prompt you to accept the SSL certificate like it would for a self-signed certificate (for which the browser can’t establish a chain of trust). If you run a business on your website, I recommend investing in a 3rd-party SSL certificate so that your website behaves professionally.

Create a self-signed SSL server certificate by running these commands on the server:

Note: You don’t need to input the lines that start with the pound character # below because they are comments.

# Create a directory to store the server certificate.
sudo mkdir /etc/nginx/ssl

# Change to the newly-created ssl directory.  Files created below will be stored here.
cd /etc/nginx/ssl

# Create a private server key.
sudo openssl genrsa -des3 -out server.key 1024
   # Remember the passphrase you entered; we will need it below.

# Create certificate signing request.
# (This is what you would send to a 3rd party authority.)
sudo openssl req -new -key server.key -out server.csr
   # When prompted for common name, enter your domain name.
   # You can leave the challenge password blank.

# To avoid Nginx requiring the passphrase when restarting,
# remove the passphrase from the server key. (Otherwise, on
# reboot, if you don't input the passphrase, Nginx won't run!)
sudo mv server.key server.key.pass
sudo openssl rsa -in server.key.pass -out server.key

# Create a self-signed certificate based upon certificate request.
# (This is what a 3rd party authority would give back to you.)
sudo openssl x509 -req -days 3650 -in server.csr -signkey server.key -out server.crt

Note: I set the certificate expiration time to 3650 days (10 years); 3rd-party certificates usually expire in 365 days (1 year). The maximum expiration you can input depends on the OpenSSL implementation. Inputting 36500 days (100 years) would probably fail due to math overflow errors (once you convert 100 years into seconds, the value is too big to store in a 32-bit variable). I believe the highest you can go is about 68 years, but I haven’t tested it.
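The arithmetic behind that estimate: a signed 32-bit counter tops out at 2^31 − 1 = 2,147,483,647 seconds, which is about 68.1 years, while 100 years is roughly 3.16 billion seconds, well past the limit.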

Configure Nginx to use the SSL server certificate we created by editing the server block file for the domain you want to use it on:

sudo nano /etc/nginx/sites-available/domain2

In the “domain2” server block file, find the commented-out “HTTPS server” section at the bottom, uncomment it, and edit it to look like the following:

# HTTPS server
#
server {
        listen 443;
        server_name mydomain2.com www.mydomain2.com;

        root /var/www/mydomain2;
        index index.php index.html index.htm;

        ssl on;
        ssl_certificate /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;

#       ssl_session_timeout 5m;
#
#       ssl_protocols SSLv3 TLSv1;
#       ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
#       ssl_prefer_server_ciphers on;

        location / {
                try_files $uri $uri/ /index.php;
        }

        # pass the PHP scripts to FPM-PHP
        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
        }
}

Note: The “HTTPS server” section looks like the “HTTP server” section we configured previously at the top, except for the addition of “listen 443” (port 443 is the HTTPS port) and the SSL-enabling configurations.

Open up the HTTPS port in the firewall and reload Nginx by running these commands on the server:

# Allow HTTPS port 443.
sudo ufw allow https

# Double-check by looking at the firewall status.
sudo ufw status

# Reload Nginx so changes can take effect.
sudo service nginx reload

Test by browsing to “https://mydomain2.com/”. When the browser prompts you to accept the self-signed server certificate, answer Yes.
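You can also test from the server’s command line with curl; a quick sketch (the -k flag tells curl to accept a certificate it cannot verify, such as our self-signed one):

# Fetch the page over HTTPS, ignoring certificate trust errors.
curl -k https://mydomain2.com/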

Require HTTPS/SSL Access on a Directory

To require HTTPS/SSL-only access on a particular subdirectory under the domain, we need to add a directive to the domain’s HTTP Server to redirect to the HTTPS Server whenever a browser accesses that directory.

Note: Apache uses a .htaccess file to allow users to configure such actions as redirecting or password-protecting directories. Nginx does not use .htaccess; instead, we will put such directives in the server block files.

Create a secure test directory by running these commands on the server:

# Create a secure test directory.
sudo mkdir /var/www/mydomain2/secure

# Create a secure test page.
sudo nano /var/www/mydomain2/secure/index.html
   # Input this content:
   <html><body>
   This page is secure!
   </body></html>

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown -R www-data:www-data /var/www/mydomain2/secure

Edit the domain’s server block file by running this command on the server:

sudo nano /etc/nginx/sites-available/domain2

In the “domain2” server block file, in the “HTTP server” section at the top (not the “HTTPS server” section at the bottom), add these lines to do the redirect:

server {
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6
        ...

        # Redirect mydomain2.com/secure to port 443.
        # Please put this before location / block as
        # Nginx stops after seeing the first match.
        location /secure {
                rewrite ^ https://$host$request_uri permanent;
        }

        ...
        location / {
        ...
}

Reload Nginx so the changes above can take effect.

sudo service nginx reload

Test by browsing to “http://mydomain2.com/secure/”; the browser should redirect to “https://mydomain2.com/secure/”.
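To verify the redirect without a browser, a quick curl sketch:

# -I fetches only the response headers; expect a 301 response with a
# Location header pointing at the https:// URL.
curl -I http://mydomain2.com/secure/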

Password-Protect a Directory

By password-protecting a directory (aka requiring basic authentication), when a browser accesses that directory, the user will get a dialog asking for the user name and password. To get this functionality working, we will create a user and password file and configure the Nginx server block to require basic authentication based upon that file.

Note: Accessing a password-protected directory over HTTP would result in the user and password being sent in clear text by the browser to the server.

Create a protected test directory by running these commands on the server:

# Create a protected test directory.
sudo mkdir /var/www/mydomain2/protect

# Create a protected test page.
sudo nano /var/www/mydomain2/protect/index.html
   # Input this content:
   <html><body>
   This page is password-protected!
   </body></html>

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown -R www-data:www-data /var/www/mydomain2/protect

We will need a utility from Apache to create the user and password file. Run these commands on the server to install and use it:

# Install htpasswd utility from Apache.
sudo apt-get install apache2-utils

# Create a user and password file using htpasswd.
sudo htpasswd -c /var/www/mydomain2/protect/.htpasswd myuser

# Add an additional user using htpasswd without "-c" create parameter.
sudo htpasswd /var/www/mydomain2/protect/.htpasswd myuser2

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown www-data:www-data /var/www/mydomain2/protect/.htpasswd

Note: If you move the “.htpasswd” file to another location (say, not under the domain’s document root), make sure that the “www-data” user or group can access it; otherwise, Nginx won’t be able to read it.

Edit the Nginx server block file by running this command on the server:

sudo nano /etc/nginx/sites-available/domain2

In the “domain2” server block file, in the “HTTP server” section at the top (not the “HTTPS server” section at the bottom), add these lines to password-protect the “/protect” directory:

server {
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6
        ...

        # Password-protect mydomain2.com/protect directory.
        # Please put this before location / block as
        # Nginx stops after seeing the first match.
        location /protect {
                auth_basic "Restricted"; # Enable Basic Authentication
                auth_basic_user_file /var/www/mydomain2/protect/.htpasswd;
        }

        ...
        location / {
        ...

        # Add this section to deny access to .ht files like .htpasswd.
        # It is recommended to copy this into the HTTPS server section below also.
        location ~ /\.ht {
                deny all;
        }

    ...
}

Reload Nginx so the changes above can take effect.

sudo service nginx reload

Test by browsing to “http://mydomain2.com/protect/”; the browser should prompt you to input a username and password.
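Basic authentication can likewise be exercised with curl; a sketch (substitute the credentials you created with htpasswd above):

# Without credentials, expect an HTTP 401 response.
curl -I http://mydomain2.com/protect/
# With credentials, the protected page should be returned.
curl -u myuser:mypassword http://mydomain2.com/protect/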

Secure Mailman

To run Mailman under HTTPS/SSL, move the “location /cgi-bin/mailman” definition in the server block file, “/etc/nginx/sites-available/mydomain2”, from the HTTP server section to the HTTPS server section.

You will also need to modify Mailman to use the HTTPS URL:

# Edit Mailman's configuration
sudo nano /etc/mailman/mm_cfg.py
   # Change its default url pattern from 'http://%s/cgi-bin/mailman/' to:
   DEFAULT_URL_PATTERN = 'https://%s/cgi-bin/mailman/'

# Propagate the HTTPS URL pattern change to all the mailing lists
sudo /usr/lib/mailman/bin/withlist -l -a -r fix_url

Note: It is not necessary to restart the Mailman service for the changes above to take effect.

If you only want the default URL Pattern change to apply to a specific mailing list, like “test@mydomain2.com”, use this command instead:

sudo /usr/lib/mailman/bin/withlist -l -r fix_url test -u mydomain2.com

Take a Snapshot

Digital Ocean provides a web tool to take a snapshot image of the VPS. I can restore using that image or even create a duplicate VPS with it. Because my VPS is now working the way I need it to, it makes sense to take a snapshot at this time.

Unfortunately, performing a snapshot requires that I shut down the VPS first. More unfortunately, the time required to take the snapshot varies from minutes to over an hour (more on this below). Worst of all, there is no way to cancel or abort the snapshot request. I have to wait until Digital Ocean’s system completes the snapshot request before my VPS is automatically restarted.

I did my first snapshot after getting WordPress working on the VPS. There was about 6GB of data (including the operating system) to make an image of. I shut down the VPS and submitted a snapshot request. The “Processing…” status with zero progress was what I saw for over an hour. During this time, my VPS and WordPress site were offline.

A little over an hour later, the status went from “Processing…” with zero progress to done in a split second. My VPS and WordPress site were back online. I think an hour to back up 6GB of data is excessive. Digital Ocean support agreed. Evidently, there was a backlog in the scheduler and requests were delayed. Because I couldn’t cancel the snapshot request, I had to wait for the backlog to clear in addition to however long the snapshot itself took.

If I had known more about the snapshot feature, I would have opted to pay for the backup feature, which costs more but doesn’t require shutting down the VPS. Unfortunately, the backup feature can only be enabled during VPS creation, so it is too late for me.

The recommended method to shut down the VPS is to run this command:

sudo poweroff

Update: I just did a snapshot and it only took 5 minutes this time.



Nginx Multiple Domains, Postfix Email, and Mailman Mailing Lists


See my previous post, Install Ubuntu, LEMP, and WordPress on an Unmanaged VPS, to learn how to set up an unmanaged VPS with Ubuntu, LEMP, and WordPress. In this post, I will configure Nginx to support multiple domains (aka virtual hosts) on the VPS, get Postfix email send and receive working, and install a Mailman mailing list manager.

Note: Though I’m doing the work on a Digital Ocean VPS running Ubuntu LTS 12.04.3, the instructions may also apply to other VPS providers.

Host Another Domain

To host another domain (say mydomain2.com) on the same VPS, we need to add another Nginx server block (aka virtual host) file. Run the commands below on the server.

Note: You don’t need to input the lines that start with the pound character # below because they are comments.

# Create a new directory for the new domain
sudo mkdir /var/www/mydomain2

# Create a test page.
sudo nano /var/www/mydomain2/index.html
   # Input this content:
   <html><body>
   Welcome to mydomain2.com.
   </body></html>

# Change owner to www-data (which Nginx threads run as) so Nginx can access.
sudo chown -R www-data:www-data /var/www/mydomain2

# Create a new Nginx server block by copying from existing and editing.
sudo cp /etc/nginx/sites-available/wordpress /etc/nginx/sites-available/mydomain2
sudo nano /etc/nginx/sites-available/mydomain2
        # Change document root from "root /var/www/wordpress;" to:
        root /var/www/mydomain2;
        # Change server name from "server_name mydomain.com www.mydomain.com;" to:
        server_name mydomain2.com www.mydomain2.com;

# Activate the new server block by creating a soft link to it.
sudo ln -s /etc/nginx/sites-available/mydomain2 /etc/nginx/sites-enabled/mydomain2

# Restart the Nginx service so changes take effect.
sudo service nginx restart

The server block files allow Nginx to match the “server_name” domain to the inbound URL and to use the matching “root” directory. When a browser connects to the VPS by IP address (and thus, doesn’t provide a domain for matching), Nginx will use the first virtual host that it loaded from the “/etc/nginx/sites-enabled/” directory (the order of which could change every time you reload Nginx).

To select a specific virtual host to load when accessed by IP address, edit the related server block file under “/etc/nginx/sites-available/” directory and add a “listen 80 default” statement to the top like so:

server {
        #listen   80; ## listen for ipv4; this line is default and implied
        #listen   [::]:80 default ipv6only=on; ## listen for ipv6
        listen 80 default;

Note: The “listen 80 default;” line should only be added to one of the server block files. The behavior may be unpredictable if you add it to more than one block file.

Send Email (using Postfix)

We will install Postfix, a mail transfer agent that routes and delivers email, on the VPS to support sending and receiving mail. WordPress (and its plugins, like Comment Reply Notification) uses Postfix to send emails. While we could use a simpler, send-only mail transfer agent, we will need Postfix later when we install Mailman (a mailing list service), which depends on it. In this section, we will configure Postfix and test the send mail function.

Before we start, we need to talk about Postfix. Postfix is very sophisticated and can be configured in many different ways to receive mail. I want to suggest one way which I believe works well for hosting many domains on a VPS. The setup I’m suggesting has one default local delivery domain (mydomain.com) and many virtual alias domains (mydomain2.com, etc.). A local delivery domain is an endpoint domain, meaning that when mail arrives there, it is placed into a local Linux user’s mailbox. A virtual alias domain routes mail sent to it to a local delivery domain.

For example, if you send an email to “susan@mydomain2.com” (a virtual alias domain), Postfix will route the email to “susan@mydomain.com” (the local delivery domain) and then deliver the mail to the local Linux “susan” user’s inbox.

Keep the above in mind as we configure Postfix and hopefully everything will be understandable. We will go step by step and build upon our understanding. We will get the local delivery domain working first and then later, add the virtual alias domain into the mix.

Install Postfix by running these commands on the server:

# Install Postfix package and dependencies.
sudo apt-get install postfix
   # Select "Internet Site" and input our local delivery domain (mydomain.com).

# Configure Postfix to use local delivery domain.
sudo nano /etc/postfix/main.cf
   # Update "myhostname = example.com" to:
   myhostname = mydomain.com
   # Double-check that "mydestination" includes local delivery domain:
   mydestination = mydomain.com, localhost, localhost.localdomain

# Reload Postfix so changes will take effect.
sudo service postfix reload

Note: I had trouble inputting the domain name when installing Postfix and ended up with a combination of the default “localhost” and my domain name, specifically “lomydomain.com”. To fix this, I had to correct the “mydestination” value in “/etc/postfix/main.cf” and the content of the “/etc/mailname” file to be the correct domain name. (The “myorigin” value in “/etc/postfix/main.cf” references the “/etc/mailname” file.)
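To double-check what Postfix ended up with, a quick sketch using postconf (which prints the effective values of named parameters) and the mailname file:

# Print the effective values of these Postfix parameters.
postconf myhostname myorigin mydestination
# This file should contain just the local delivery domain (mydomain.com).
cat /etc/mailname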

The Postfix service should already be started, and it is configured to start on boot by default. To send a test email, use the sendmail command line tool (installed as a dependency of Postfix) on the server:

sendmail emailto@domain.com

To: emailto@domain.com
Subject: PostFix Test Email

This is the subject of a test email sent after configuring Postfix.

# Press CTRL-D key combo to end

Note: The From email address is constructed from the currently logged-in Linux username and the “myorigin” value in “/etc/postfix/main.cf” (which, in turn, points at the “/etc/mailname” file containing the local delivery domain name). Thus, the From address should be “mynewuser@mydomain.com”.

To test the PHP send mail feature, run the following commands on the server:

# Install the PHP command line package.
sudo apt-get install php5-cli

# Enable logging of PHP-CLI errors to a file
sudo nano /etc/php5/cli/php.ini
   # Add this line:
   error_log = /tmp/php5-cli.log
   # You must use a writeable directory like /tmp.

# Open the PHP interpretive shell and call the mail() function
php -a
php > mail('emailto@domain.com', 'Subject here', 'Message body here');
php > exit();

If the PHP mail function works, then most likely WordPress and its plugins should be able to send emails. To be absolutely sure, you can install the Check Email plugin to test sending an email from within WordPress.

Receive Mail (to Local Delivery Domain)

By default, Postfix is configured to deliver emails sent to “postmaster@mydomain.com” to the local Linux “root” user’s inbox. You can tell this from the following Postfix settings:

cat /etc/postfix/main.cf
   ...
   alias_maps = hash:/etc/aliases
   mydestination = mydomain.com, localhost, localhost.localdomain
   ...

cat /etc/aliases
   ...
   postmaster: root

In the Postfix’s “main.cf” file, the “alias_maps” value points to an email-to-local-user mapping file for the local delivery domain, and the “mydestination” value contains the default local delivery domain “mydomain.com” (ignore the localhost entries).

In the “alias_maps” file “/etc/aliases”, the “postmaster” email username is mapped to the root user’s inbox. Putting it together, any email sent to “postmaster@mydomain.com” will be delivered to the root user’s inbox.
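One caveat from general Postfix usage: “/etc/aliases” is consulted through a hashed database, so if you add or change an alias, rebuild that database before expecting the new mapping to take effect:

# Rebuild the alias database from /etc/aliases.
sudo newaliases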

Note: Mail can be delivered to any of the local Linux users by using their exact usernames, even though they are not listed in “alias_maps”. For example, emails sent to “root@mydomain.com” will be delivered to the local root user and emails sent to “mynewuser@mydomain.com” will be delivered to the local mynewuser user.

To receive external emails sent to the VPS, we need to open up the SMTP (Simple Mail Transfer Protocol) port in the firewall and create a DNS MX (mail exchange) record. (Port 25 is the default SMTP port used for receiving emails.)

To open up the SMTP port, run the following commands on the server:

# Allow SMTP port 25.
sudo ufw allow smtp

# Double-check by looking at the firewall status.
sudo ufw status

I used Digital Ocean’s DNS management web interface to add an MX record pointing at “@” (the A record that resolves to my_ip_address) with priority 10 for mydomain.com. The priority allows us to add more than one MX record and determines the order of mail servers to submit email to. Rather than using the highest priority, 0, using priority 10 allows me to easily add a mail server before or after this one in the future.

Note: Most websites suggest creating a CNAME record (redirecting “mail” to “@”) for mail.mydomain.com and then pointing the MX record at mail.mydomain.com. This is not necessary. The simplest configuration is to just point the MX record at the A record “@”, as I did above.

To see if the DNS was updated with the MX record, I ran the following test command on the server (or any Linux machine):

dig mydomain.com MX @ns1.digitalocean.com

# technical details are returned, the most important is the ANSWER SECTION.
;; ANSWER SECTION:
mydomain.com.         1797    IN      MX      10 mydomain.com.

In the above example “ANSWER SECTION”, we can see that the MX record for mydomain.com points at mydomain.com (as the mail receiving server) with priority 10 (as configured). The 1797 value is the TTL (Time to Live) setting in seconds (1797 seconds is about 29.95 minutes) which indicates how long this MX record is valid for. DNS servers which honor this TTL setting will refresh at that rate; however, some DNS servers may ignore the TTL value in favor of much longer refresh times. (The A and CNAME records also have TTL values. Digital Ocean does not allow me to customize the TTL values for any DNS record.)

If the “ANSWER SECTION” is missing from the output, then your VPS provider may not have updated its DNS servers yet. (Digital Ocean DNS servers took 20 minutes to update the MX record.) Similar to the mydomain.com’s A and CNAME record changes, you may need to wait for the MX record to propagate across the internet (most DNS servers will be updated in minutes, while some may take hours).

Also, you can use the intoDNS website to check your MX record details. Input your domain name, click on the Report button, and look for the “MX Records” section. If your domain’s MX record shows up there, you can be reasonably certain that it has propagated far enough for you to start sending emails to your domain.

Test by sending an email to “postmaster@mydomain.com” from your local mail client (or Google Mail or Yahoo Mail). To see if the mail was received, do the following on your server:

# View the root user's received mail store for the email you sent.
sudo more /var/spool/mail/root

# Alternatively, install the mail client to view and delete received emails.
sudo apt-get install mailutils
# Read mail sent to the local root user.
sudo mail
   # type "head" to see a list of all mail subject lines.
   # type a number (ex: 1) to see the mail content.
   # type "del 1" to delete that mail.
   # type "quit" to exit mail client.

Note: If you want to check a non-root user’s inbox, log in as that non-root user and just run “mail”, instead of “sudo mail”.

Note: According to Internet standards, all mail-capable servers should be able to receive emails sent to their “postmaster” address. While it may be overkill, I decided to create MX records and postmaster aliases for all the domains that I host on the VPS.

Receive Mail (to other Virtual Alias Domains)

Now that we know that the server can receive emails, we want to configure Postfix to support emails sent to the multiple domains hosted on the VPS. (If you want more local users than “root” and “mynewuser”, use the “adduser” command per the previous post to create new users.)

Recall our earlier discussion about how a virtual alias domain will route mail to the local delivery domain (“mydomain.com”), which will finally deliver the mail to the local user’s inbox. We will configure the additional domains like “mydomain2.com” to be virtual alias domains.

First, we need to create a mapping file that will map from the virtual alias domain to the local delivery domain. Run this command on the server to create that file:

sudo nano /etc/postfix/virtual

In the “virtual” mapping file, input the following lines:

mydomain2.com IGNORE # Declare virtual alias domain
postmaster@mydomain2.com postmaster

In the first line, we are using a newer feature of Postfix to declare a virtual alias domain “mydomain2.com” by starting the line with it. (The previous, alternative method was to put the virtual alias domain declarations into a “virtual_alias_domains” property in the “/etc/postfix/main.cf” file.) The rest of the first line, “IGNORE …”, is ignored. The second line indicates that mail sent to “postmaster@mydomain2.com” should be routed to “postmaster” at the local delivery domain; that is, “postmaster@mydomain.com”.
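
For illustration, the “virtual” mapping file can hold more than one mapping per domain. The extra entries below are hypothetical: the “sales” address assumes the local “mynewuser” user exists, and the “@mydomain2.com” line is a Postfix catch-all that matches any otherwise-unmapped address at that domain (full lines starting with “#” are comments in Postfix lookup tables):

mydomain2.com IGNORE
postmaster@mydomain2.com postmaster
# Hypothetical: deliver sales mail to the local mynewuser user.
sales@mydomain2.com mynewuser
# Hypothetical catch-all: route anything else at mydomain2.com to root.
@mydomain2.com root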

Configure Postfix to use the new virtual alias domain mapping file:

sudo nano /etc/postfix/main.cf
   # Add a new virtual_alias_maps line:
   virtual_alias_maps = hash:/etc/postfix/virtual

# Update the hash db version of /etc/postfix/virtual that Postfix uses.
sudo postmap /etc/postfix/virtual

# Reload Postfix so changes take effect.
sudo service postfix reload

Test by sending an email to the email address configured in the virtual alias mapping file; in this case, “postmaster@mydomain2.com”. Per the previous instructions, check the root user’s inbox by using the “sudo mail” command.

Install Mailing List Service (Mailman)

Besides hosting mailing lists, Mailman (GNU Mailing List Manager) allows mailing list administration through a web interface or by sending email messages (like subscribe or unsubscribe messages).
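
For example (a sketch; “mylistname” is a placeholder for a real list created later in this section), once a list exists you can ask Mailman for its email-command help by writing to the list’s “-request” address:

# Send the "help" command to the Mailman request address (requires mailutils).
echo "help" | mail -s "help" mylistname-request@mydomain2.com

Mailman replies with the email commands it accepts (subscribe, unsubscribe, etc.).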

To install Mailman, run the following commands on the server:

# Install Mailman
sudo apt-get install mailman

# Create a mandatory site list named mailman
sudo newlist mailman

The “newlist” command will request the following:

To finish creating your mailing list, you must edit your /etc/aliases (or
equivalent) file by adding the following lines, and possibly running the
`newaliases' program:

## mailman mailing list
mailman:              "|/var/lib/mailman/mail/mailman post mailman"
mailman-admin:        "|/var/lib/mailman/mail/mailman admin mailman"
...

Ignore that instruction. You don’t need to manually edit the Postfix “/etc/aliases” file. Later on, we will configure Mailman to automatically generate its own aliases file, which Postfix will read from.

Once the site wide “mailman” list is created, we can start the Mailman service by running this command:

sudo service mailman start

Mailman is configured to start on boot by default. (Running “sudo service mailman status” won’t output anything usable; to see if Mailman is running, list its processes using “ps -aef | grep -i mailman” instead.)

To get Mailman’s web interface working, we will need to install FcgiWrap (Simple CGI support) so that Nginx can integrate with Mailman. FcgiWrap works similarly to how PHP-FPM (FastCGI Process Manager for PHP) was used by Nginx to pass the processing of PHP files to the PHP platform. FcgiWrap will be used by Nginx to pass the Mailman-related interface calls to Mailman.

To install FcgiWrap, run the following command on the server:

sudo apt-get install fcgiwrap

FcgiWrap will be started automatically after installation and, by default, is configured to start at boot time. FcgiWrap uses a Unix socket file “/var/run/fcgiwrap.socket” (similar to how PHP-FPM uses “/var/run/php5-fpm.soc”) to communicate with Nginx. (Similar to Mailman, running “service fcgiwrap status” won’t output anything usable; to see if FcgiWrap is running, list its processes using “ps -aef | grep -i fcgiwrap” instead.)
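
You can also quickly confirm that the socket file exists (if it is missing, FcgiWrap is probably not running):

# Verify that the FcgiWrap socket file exists.
ls -l /var/run/fcgiwrap.socket

# If necessary, restart FcgiWrap to recreate the socket.
sudo service fcgiwrap restart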

Edit the Nginx server block file for the domain under which you want the Mailman web interface to be accessible. For example, run this command on the server:

sudo nano /etc/nginx/sites-available/mydomain2

In the mydomain2 server block file, add the following lines to the end of the “server” section:

server {
        ...

        location /cgi-bin/mailman {
               root /usr/lib/;
               fastcgi_split_path_info (^/cgi-bin/mailman/[^/]*)(.*)$;
               fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
               include /etc/nginx/fastcgi_params;
               fastcgi_param PATH_INFO $fastcgi_path_info;
               #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
               fastcgi_intercept_errors on;
               fastcgi_pass unix:/var/run/fcgiwrap.socket;
        }
        location /images/mailman {
               alias /usr/share/images/mailman;
        }
        location /pipermail {
               alias /var/lib/mailman/archives/public;
               autoindex on;
        }
}

Note: I did two things above differently from what most websites would say to do:

  • I put the “fastcgi_param SCRIPT_FILENAME …” line before “include /etc/nginx/fastcgi_params;” to avoid it getting overwritten by “fastcgi_params”; otherwise, the call to Mailman would fail with a 403 Forbidden error.
  • I commented out “fastcgi_param PATH_TRANSLATED …” because it is not necessary.

Reload Nginx to make the changes take effect:

sudo service nginx reload

You can now browse to the following Mailman administrative pages:

  • http://mydomain2.com/cgi-bin/mailman/admin/mylistname – manage the “mylistname” list.
  • http://mydomain2.com/cgi-bin/mailman/listinfo/mylistname – show the users belonging to “mylistname” list.
  • http://mydomain2.com/pipermail – view the mailing list archives.

We still need to integrate Mailman with Postfix so that emails sent to mailing lists, especially those belonging to virtual alias domains, will be routed to Mailman by Postfix.

Edit the Mailman configuration by running this command on the server:

sudo nano /etc/mailman/mm_cfg.py

In the “mm_cfg.py” file, add or uncomment and modify these lines like so:

MTA = 'Postfix'
POSTFIX_STYLE_VIRTUAL_DOMAINS = ['mydomain2.com']

Mailman has the capability of generating aliases for Postfix. We will use that capability. Run these commands on the server:

# Create Mailman's aliases and virtual-mailman files.
sudo /usr/lib/mailman/bin/genaliases

# Make the generated files group-writeable.
sudo chmod g+w /var/lib/mailman/data/aliases*
sudo chmod g+w /var/lib/mailman/data/virtual-mailman*

Note: The “genaliases” command will generate “aliases”, “aliases.db”, “virtual-mailman”, and “virtual-mailman.db” files in the “/var/lib/mailman/data” directory.

We then add the generated Mailman aliases and virtual aliases files to the Postfix “alias_maps” and “virtual_alias_maps” properties.

To edit Postfix, run this command on the server:

sudo nano /etc/postfix/main.cf

In the Postfix “main.cf” file, add to the end of the “alias_maps” and “virtual_alias_maps” lines like so:

alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
virtual_alias_maps = hash:/etc/postfix/virtual, hash:/var/lib/mailman/data/virtual-mailman

Note: The changes above will configure Postfix to read and process Mailman’s generated aliases files, in addition to its own aliases files.

Reload Postfix to have the changes take effect:

sudo service postfix reload
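
To double-check the mappings without sending any mail, you can query the lookup tables directly (these commands assume the files created above):

# Query the virtual alias map; this should print "postmaster".
postmap -q postmaster@mydomain2.com hash:/etc/postfix/virtual

# Query the local aliases map; this should print "root".
postalias -q postmaster hash:/etc/aliases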

Recall that earlier I said to ignore the “newlist” instructions to add the Mailman aliases to the Postfix “/etc/aliases” file, because we would handle it automatically later. That is exactly what we did above.

Look at Mailman’s generated “aliases” file by running this command on the server:

sudo cat /var/lib/mailman/data/aliases

# STANZA START: mailman
# CREATED: Tue Mar 25 05:53:44 2014
mailman:             "|/var/lib/mailman/mail/mailman post mailman"
mailman-admin:       "|/var/lib/mailman/mail/mailman admin mailman"
...
# STANZA END: mailman

It should look exactly like the aliases output by the “newlist” command. Mailman’s generated “aliases” file is included in Postfix’s “alias_maps” and thus is processed by Postfix along with the contents of the original “/etc/aliases” file.

To test a mailing list belonging to a virtual alias domain, run these commands on the server:

# Create a test mailing list.
sudo newlist test@mydomain2.com

# Reload Postfix to make changes take effect.
sudo service postfix reload

The “newlist” command will automatically update Mailman’s “aliases” and “virtual-mailman” aliases files with entries for “test@mydomain2.com”. However, we still need to manually reload Postfix so that Postfix will pick up the changes. (Reloading Postfix requires sudo/root access, so Mailman can’t do it automatically.)

Let’s look at Mailman’s updated “aliases” and “virtual-mailman” files to see what was added (the pre-existing, generated “mailman” list aliases are omitted below):

sudo cat /var/lib/mailman/data/aliases

# STANZA START: test
# CREATED: Tue Mar 25 05:56:47 2014
test:             "|/var/lib/mailman/mail/mailman post test"
test-admin:       "|/var/lib/mailman/mail/mailman admin test"
...
# STANZA END: test

sudo cat /var/lib/mailman/data/virtual-mailman

# STANZA START: test
# CREATED: Tue Mar 25 05:56:47 2014
test@mydomain2.com              test
test-admin@mydomain2.com        test-admin
...
# STANZA END: test

Recall that a virtual alias domain routes to a local delivery domain, which then delivers to an endpoint (an inbox or, in the case above, a program called Mailman). For example, when mail is sent to the “test@mydomain2.com” mailing list, it is routed to “test@mydomain.com” (the local delivery domain) and then passed to the “mailman post test” program, which forwards a copy to each member of the “test” mailing list.

Note: Because all mailing lists also exist under the local delivery domain, the mailing list name must be unique across all the domains hosted on the machine.

To test, access the Mailman web interface at “http://mydomain2.com/cgi-bin/mailman/admin/test” to add members to the “test@mydomain2.com” mailing list. Then send an email to that mailing list and its members should each receive a copy.
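
Alternatively, you can send the test email from the server itself using the mailutils package installed earlier (a minimal sketch; adjust the address to match your list):

# Send a quick test message to the mailing list.
echo "mailing list test" | mail -s "test message" test@mydomain2.com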

Once you are done testing, you can delete the list by running these commands on the server:

# Remove list test@mydomain2.com (don't include @mydomain2.com part below).
sudo /usr/lib/mailman/bin/rmlist -a test

# Reload Postfix to make changes take effect.
sudo service postfix reload

Debugging Mail

Both Postfix and Mailman will output error messages and debug logs to:

/var/log/mail.err
/var/log/mail.log
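
For example, you can watch the log in one window while sending a test email from another:

# Follow the mail log in real time (Ctrl-C to stop).
sudo tail -f /var/log/mail.log

# Search the error log for recent problems.
sudo grep -i error /var/log/mail.err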

At this point, my VPS is hosting several domains, I can send and receive emails, and I have mailing lists working. See my followup post, Nginx HTTPS SSL and Password-Protecting Directory, to learn how to configure Nginx to enable HTTPS SSL access and to password-protect a directory.

Install Ubuntu, LEMP, and WordPress on an Unmanaged VPS

Linux 4 Comments

Before this post, I was hosting my website using a shared web hosting provider. Shared web hosting is convenient because the provider takes care of the software platform and its security updates (though I am still responsible for updating a PHP application like WordPress). And if there is a problem with the platform, the provider is responsible for fixing it. Unfortunately, shared web hosting may have performance and scalability issues (resulting from overcrowded websites on the single shared server and strict restrictions on CPU and memory usage) and disallows non-PHP software installation such as a Subversion server.

With the above in mind, I decided to look into unmanaged VPS (virtual private server) hosting as an alternative to shared web hosting. A virtual server is cheaper than a physical server, and an unmanaged server is cheaper than a managed server. A managed VPS provider would install the software stack for me and provide support for about $30 or more per month. An unmanaged VPS depends on me to install the software and costs only $5 per month with Digital Ocean. The downside to an unmanaged VPS is that if anything goes wrong with the software, I am responsible for fixing it.

In this post, I will outline the steps I took to install WordPress on an unmanaged VPS hosted by Digital Ocean. Most of these instructions may be applicable to other VPS providers.

Create VPS

When creating a VPS, the most important choice is the operating system. I recommend getting the latest Ubuntu Server LTS (long-term support) version, currently 12.04.4. All up-to-date software packages should support the LTS version of Ubuntu, so it is a safe choice to make. Unfortunately, Digital Ocean only offered the LTS version 12.04.3, so I chose that. Because it will be a long time, if ever, before I need a VPS with more than 4GB of memory, I chose the 32-bit version to keep memory usage as minimal as possible.

You should have an IP address and root password for your VPS before proceeding.

Secure VPS

Remote access to the VPS is accomplished by SSH (Secure Shell). (If you know telnet, think of SSH as an encrypted version of telnet.) By default, servers are set up to use SSH with port 22 and user root. Unsophisticated hackers will attempt to gain access to a server using those settings and a brute-force password generator. While a very hard-to-guess root password makes the server more secure, it is even better to change the SSH port number and use a non-root user.

Note: Lines below that start with the pound character # are comments and you don’t need to input them.

Run these commands:

# Connect to your server.
ssh root@my_ip_address

# Change the root password.
passwd

# Create a new non-root user.
adduser mynewuser

We will configure the new user to execute commands with root privileges by using the sudo (super user) tool. Sudo involves prepending commands with the word “sudo”. Sudo will prompt for your own password, not the root password. (You can also configure sudo to log all commands issued using it.) We will grant all sudo privileges to the new user by running the “visudo” command (a shortcut to open up the sudo configuration file “/etc/sudoers” under the vi or nano editor, whichever is configured as the default text editor) and adding a line under the “User privilege specification” section for the new user like so:

# User privilege specification
root       ALL=(ALL:ALL) ALL
mynewuser  ALL=(ALL:ALL) ALL

To disallow SSH root login and to change the SSH port number (say from 22 to 3333), edit the SSH configuration file:

sudo nano /etc/ssh/sshd_config

In the “sshd_config” file, change the value for the “Port” and “PermitRootLogin” variables like below. Also, add the last two lines to permit SSH access to only the new user.

# What ports, IPs and protocols we listen for
Port 3333

# Authentication:
PermitRootLogin no

# Restrict user login
UseDNS no
AllowUsers mynewuser

Reload the SSH service so the changes can take effect:

sudo reload ssh

Test the new settings by opening up a command window on your client and running the following commands:

ssh -p 3333 root@my_ip_address
ssh -p 3333 mynewuser@my_ip_address

The attempt to SSH using the root user should fail. The attempt using the new user should succeed. If you cannot SSH into the server with the new user, double-check the changes using your original SSH window (which should still be connected to your server). If you don’t have that original SSH window still connected, your VPS provider should provide console access (like having a virtual keyboard and monitor connected directly to the VPS) through their website for recovery scenarios such as this.

The UFW (Uncomplicated Firewall) tool allows us to easily configure the iptables firewall, which is built into the Linux kernel. Run these commands on the server:

# Allow access to custom SSH port and HTTP port 80.
sudo ufw allow 3333/tcp
sudo ufw allow http

# Enable the firewall and view its status.
sudo ufw enable
sudo ufw status

The above steps configure a basic level of security for the VPS.

Install LEMP

WordPress requires an HTTP server, PHP, and MySQL. The LEMP (Linux, Nginx, MySQL, PHP) software stack matches those requirements. (Nginx is pronounced “engine-ex”, which explains the “E” in the acronym.) You may be more familiar with the LAMP stack, which uses Apache instead of Nginx as the HTTP server. Nginx is a high-performance HTTP server that uses significantly less CPU and memory than Apache would under high-load situations. By using Nginx, we gain the capacity to handle a greater number of page requests than usual.

On the server, run these commands:

# Update installed software packages.
sudo apt-get update

# Install MySQL.
sudo apt-get install mysql-server php5-mysql
sudo mysql_install_db

# Secure MySQL.
sudo /usr/bin/mysql_secure_installation

# Do a test connect to MySQL service.
mysql -u root -p
mysql> show databases;
mysql> quit

When installing MySQL, you will be prompted to input a MySQL root password. If you leave it blank, you will have another opportunity to change it when running the “mysql_secure_installation” script. You will want to answer yes to all the prompts from the “mysql_secure_installation” script to remove anonymous MySQL users, disallow remote MySQL root login, and remove the test database.

MySQL is not configured to start on boot by default. To start MySQL at boot time, run only the first command below:

# Start MySQL at boot time.
sudo update-rc.d mysql defaults

# FYI, undo start MySQL at boot time.
sudo update-rc.d -f mysql remove

If you have issues connecting to MySQL, you can start MySQL in an unsecured safe mode (which bypasses the password requirement) to perform a recovery action such as resetting the MySQL root password like so:

# Stop normal MySQL service and start MySQL in safe mode.
sudo service mysql stop
sudo mysqld_safe --skip-grant-tables &

# Connect to MySQL, change root password, and exit.
mysql -u root
mysql> use mysql;
mysql> update user set password=PASSWORD("newrootpassword") where User='root';
mysql> flush privileges;
mysql> quit

# Stop MySQL safe mode and start normal MySQL service.
sudo mysqladmin -u root -p shutdown
sudo service mysql start

Install and start Nginx by running these commands on the server:

sudo apt-get install nginx
sudo service nginx start

Browse to your server’s IP address and you should see a “Welcome to nginx!” page.

To make it possible for Nginx to serve PHP scripts, we need to install the PHP platform and the PHP-FPM (FastCGI Process Manager for PHP) service. PHP-FPM enables Nginx to call the PHP platform to interpret PHP scripts. PHP-FPM should already be installed as a dependency of the “php5-mysql” package (part of the MySQL installation instructions above). We can make sure that PHP-FPM (and its dependency, the PHP platform) is installed by trying to install it again (installing an already-installed package does no harm).

# List the installed packages and grep for php name matches:
dpkg --get-selections | grep -i php

# Install PHP-FPM package.
sudo apt-get install php5-fpm

# Test the install by displaying the version of PHP-FPM.
php5-fpm -v

Secure and optimize the PHP-FPM service by running these commands on the server:

# Fix security hole by forcing the PHP interpreter to only process the exact file path.
sudo nano /etc/php5/fpm/php.ini
   # Change the "cgi.fix_pathinfo=1" value to:
   cgi.fix_pathinfo=0

# Configure PHP to use a Unix socket for communication, which is faster than default TCP socket.
sudo nano /etc/php5/fpm/pool.d/www.conf
   # Change the "listen = 127.0.0.1:9000" value to:
   listen = /var/run/php5-fpm.soc

# Restart the PHP-FPM service to make the changes effective.
sudo service php5-fpm restart

Nginx defines the site host (and each virtual host) in a server block file. The server block file links the domain name to a directory where the domain’s web files (HTML, PHP, images, etc.) are located. When you browse to the VPS, Nginx will serve files from the directory that corresponds to the domain name given by your browser. That is a simple explanation of how Nginx can support hosting more than one domain on a single VPS.
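
As a minimal sketch (the second domain and directory below are assumptions for illustration only), a server block for another domain could look like this, mapping “mydomain2.com” requests to files under “/var/www/mydomain2”:

server {
        listen 80;
        server_name mydomain2.com www.mydomain2.com;
        root /var/www/mydomain2;
        index index.html index.htm;
}

Nginx compares the browser’s requested host name against each server block’s “server_name” value and serves files from the matching block’s “root” directory.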

Edit the default server block file to support PHP scripts:

sudo nano /etc/nginx/sites-available/default

In the “default” server block file, change the following:

server {
        # Change "index index.html index.htm" to
        index index.php index.html index.htm;
        # Alternatively, for WordPress sites, I recommend only the index.php value, like so:
        #index index.php;

        # Change "server_name localhost;" to:
        server_name mydomain.com www.mydomain.com;

        location / {
                # For WordPress site, change "try_files $uri $uri/ /index.html;" to:
                try_files $uri $uri/ /index.php?$args;
                # If you don't change this, then non-default permalink URLs will fail with 500 error.
        }

        # Uncomment the whole "location ~ \.php$" block except for the "fastcgi_pass 127.0.0.1:9000;" line.
        # Change the "fastcgi_pass unix:/var/run/php5-fpm.sock" line to be "fastcgi_pass unix:/var/run/php5-fpm.soc"
        # (to match the socket path we configured in PHP-FPM's www.conf above).
        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;

                # With php5-cgi alone:
                #fastcgi_pass 127.0.0.1:9000;
                # With php5-fpm:
                fastcgi_pass unix:/var/run/php5-fpm.soc;
                fastcgi_index index.php;
                include fastcgi_params;
        }
}

Restart the Nginx service to have the changes take effect:

sudo service nginx restart

Create a PHP test script by running this edit command:

sudo nano /usr/share/nginx/www/info.php

In “info.php” file, input the following text:

<?php
phpinfo();
?>

Browse to “http://my_ip_address/info.php” and you should see a page containing information about the PHP installation.

Both PHP-FPM (php5-fpm) and Nginx are configured to start at boot time by default. You can double-check by running the chkconfig utility to list the services and their runlevel configurations:

# Install chkconfig package.
sudo apt-get install chkconfig

# List all services and their runlevel configurations.
chkconfig --list

Note: You won’t be able to use chkconfig to change the runlevels because it is not compatible with the new Upstart runlevel configuration used by Ubuntu. Instead, use update-rc.d or sysv-rc-conf to make runlevel changes.
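
For example (using Nginx purely as an illustration), update-rc.d can toggle whether a service starts at boot:

# Disable Nginx at boot time (example only).
sudo update-rc.d nginx disable

# Re-enable Nginx at boot time.
sudo update-rc.d nginx enable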

Debugging LEMP

To debug issues with LEMP, look at these log files:

MySQL: /var/log/mysql/error.log
Nginx: /var/log/nginx/error.log
PHP: /var/log/php5-fpm.log

For performance reasons, the debug logs from the PHP-FPM worker threads are discarded by default. If you wish to see error logs from your PHP applications, you will need to enable logging from worker threads.

Run the following commands on the server:

# Edit the PHP-FPM worker pool config file to enable logging.
sudo vi /etc/php5/fpm/pool.d/www.conf
   # Uncomment this line:
   catch_workers_output = yes

# Reload the PHP-FPM service to make the changes take effect.
sudo service php5-fpm reload

You should now see error logs from the PHP worker threads written to the “/var/log/php5-fpm.log” file.

Install WordPress

Install WordPress by running the following commands on the server:

# Get the latest WordPress version.
cd /tmp
wget http://wordpress.org/latest.tar.gz

# Uncompress the WordPress archive file.
tar zxvf latest.tar.gz

# Create a wp-config.php configuration file by copying from the sample.
cd wordpress
cp wp-config-sample.php wp-config.php

# Move the WordPress files to the Nginx root document directory.
sudo mv /tmp/wordpress/* /usr/share/nginx/www/

# Change ownership to www-data user (which Nginx worker threads are configured to run under).
sudo chown -R www-data:www-data /usr/share/nginx/www/*

Note: If WordPress detects its configuration file “wp-config.php” is missing, it will offer to run a web-based wizard to create it. However, the wizard won’t work because our MySQL root user requires a password. Besides, using the wizard would not be very secure because the WordPress database’s MySQL user password would be sent in the clear over HTTP. Instead, we manually created the “wp-config.php” file in the above steps and will modify it below.

Create a MySQL database and user for WordPress by running these commands on the server:

# Open a MySQL interactive command shell.
mysql -u root -p

# Create a MySQL WordPress database.
mysql> create database wordpress;

# Create a MySQL user and password.
mysql> create user wordpress@localhost;
mysql> set password for wordpress@localhost = PASSWORD('mypassword');

# Grant the MySQL user full privileges on the WordPress database.
mysql> grant all privileges on wordpress.* to wordpress@localhost identified by 'mypassword';

# Make the privilege changes effective.
mysql> flush privileges;

# Double-check by showing the privileges for the user.
mysql> show grants for wordpress@localhost;

# Exit the MySQL interactive shell.
mysql> quit

Update the WordPress configuration file by running this command:

sudo nano /usr/share/nginx/www/wp-config.php

In the “wp-config.php” file, input the newly-created MySQL database, user, and password like so:

define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'mypassword');

Browse to your server’s IP address and follow the WordPress instructions to complete the installation.

Change WordPress Document Root

This section is optional. If you wish to store the WordPress installation into an alternative directory path, say “/var/www/wordpress”, instead of “/usr/share/nginx/www”, follow the steps below. (I suggest “/var/www/wordpress” instead of “/var/www” so that later, when you host additional domains, the WordPress installation will be in its own separate directory.)

To move WordPress to a new directory, run these commands on the server:

# Move WordPress files to new directory.
sudo mkdir -p /var/www/wordpress
sudo mv /usr/share/nginx/www/* /var/www/wordpress/

# Rename the existing Nginx server block file.
sudo mv /etc/nginx/sites-available/default /etc/nginx/sites-available/wordpress

# Update the Nginx server block file with new location.
sudo nano /etc/nginx/sites-available/wordpress
   # Change document root from "root /usr/share/nginx/www;" to "root /var/www/wordpress;".

# Enable the renamed Nginx server block by creating soft link.
sudo ln -s /etc/nginx/sites-available/wordpress /etc/nginx/sites-enabled/wordpress

# Remove the old Nginx server block soft link (which points at a non-existing file).
sudo rm /etc/nginx/sites-enabled/default

# Reload the Nginx service so the changes can take effect.
sudo service nginx reload

Test this change by browsing to your server’s IP address. You should see the WordPress website.

Migrate WordPress

When migrating an existing WordPress site to your new VPS, I suggest doing the following steps:

  1. On your old WordPress server, update WordPress and all plugins to the latest versions.
  2. On the new WordPress server, browse to “http://my_ip_address/wp-admin/” to install the same theme and plugins that exist on the old server. Activate the theme, but leave all the plugins inactive. When we do the WordPress database restore, the plugins will be configured and activated to match the old server.
  3. Copy the old image uploads directory to the new server. Supposing that the WordPress on the old server is located at “/home/username/public_html/wordpress”, run the following commands on the new server:
    sudo scp -r username@oldserver:/home/username/public_html/wordpress/wp-content/uploads /var/www/wordpress/wp-content/
    sudo chown -R www-data:www-data /var/www/wordpress/wp-content/uploads

    Note: If the old server uses a custom SSH port number, scp will require the custom port number as a “-P” input parameter; for example, “sudo scp -r -P 2222 username@oldserver…”.

  4. Export the WordPress database from the old server using the recommended phpMyAdmin interface (which generates a more human-friendly SQL output than mysqldump) or by running the following command on the old server:
    mysqldump -u oldusername -p olddatabasename > wordpress.sql
  5. Before importing the WordPress database into the new server, we will need to change references to the image uploads directory (and other directories) in the exported SQL file. If you don’t make this change, then images may not be visible in the WordPress postings. Following the example above, replace any occurrence of “/home/username/public_html/wordpress/” with “/var/www/wordpress/” in the exported database SQL file. (A sed one-liner that automates this replacement is sketched after this list.)
  6. Copy the exported SQL file to the new server, say to the “/tmp” directory.
  7. On the new server, run these commands:
    # Open up MySQL command shell.
    mysql -u root -p

    # Empty the existing WordPress database.
    mysql> drop database wordpress;
    mysql> create database wordpress;

    # Exit the MySQL command shell.
    mysql> quit

    # Import the exported SQL file.
    mysql -u root -p wordpress < /tmp/wordpress.sql

    Note: Dropping and re-creating the WordPress database does not affect the WordPress database user and its privileges.
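
For step 5, a sed one-liner can perform the path replacement (a sketch, assuming the exported file is named “wordpress.sql” and the paths match the example above; run it on whichever machine holds the file):

# Replace the old WordPress path with the new one, in place.
sed -i 's|/home/username/public_html/wordpress/|/var/www/wordpress/|g' wordpress.sql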

Browse to your new server’s IP address and you should see your WordPress website. Unfortunately, we cannot verify that the images are loading correctly on the new server because the image URLs use the domain name which points at the old server (the images are loaded from the old server, not the new server). We now need to point the domain at our new server.

Migrate Domain

I used Digital Ocean’s DNS (Domain Name System) “Add Domain” tool to create an A (Address) record (“@” => “my_ip_address”) linking mydomain.com to the new server’s IP address. I also added a CNAME (Canonical name) record (“www” => “@”) to have www.mydomain.com point at mydomain.com. I tested whether Digital Ocean’s DNS servers were updated or not by repeatedly running one of these two commands on the server (or any Linux machine):

nslookup mydomain.com ns1.digitalocean.com
nslookup www.mydomain.com ns1.digitalocean.com

Note: Digital Ocean’s DNS servers took about 20 minutes to update.

Once Digital Ocean’s DNS servers had lookup entries for my domain name, I went to my domain registrar and updated my domain’s name servers to point at Digital Ocean’s DNS servers. To test whether Digital Ocean’s DNS servers were being used or not, I occasionally ran the following commands on my client machine to check the IP address returned:

nslookup mydomain.com
ping mydomain.com

Once the new IP address was returned consistently (it took 24 hours before my internet provider’s DNS servers were updated), I then browsed to mydomain.com and checked that the images were loading correctly.

You can empty the DNS caches on your machine and browser by using any of these commands:

# On Windows:
ipconfig /flushdns

# On Mac OS X:
sudo dscacheutil -flushcache

# On Chrome, browse to URL below and click on the "clear host cache" button.
chrome://net-internals/#dns

If you want to check the DNS change propagation across the whole world, try the What’s My DNS? website. It will make calls to DNS name servers located all over the planet.

At this point, I have a working VPS that is reasonably secured and successfully hosting my migrated WordPress website. Some rough page load tests resulted in 0.5-1 second load times, as opposed to the 2-4 seconds on the old server (which was shared web hosting with a LAMP stack). I hope that this guide will help you should you decide to move your WordPress website to an unmanaged VPS.

See my followup post, Nginx Multiple Domains, Postfix Email, and Mailman Mailing Lists, to configure Nginx to support multiple domains, get Postfix email flowing, and get Mailman mailing lists working.

Copy Table From Acrobat X to Excel

Windows No Comments

In Adobe Acrobat 9, I was able to select a table, right-click on the selection, choose “Copy As Table…”, and then paste that table into Excel (right-click on a cell, select “Paste Special…”, and choose “XML Spreadsheet”). In Acrobat X, the same actions result in a one-row table in Excel with the column values from all rows concatenated together. The resulting table is useless.

Acrobat X also provides alternative options to “Save As Table…” and to “Open Table in Spreadsheet”, but those do not work either. They result in the same one-row table.

The best guess as to the cause is that Acrobat X incorrectly interprets the table tags. This problem has been known since at least the beginning of 2011 but unfortunately, Adobe has not provided a fix in even the most recent version of Acrobat X. While there are several solutions, including manually altering the table tags (so that Acrobat X would recognize the table) or exporting first to a Word document then doing the copy-paste from Word to Excel, the simplest solution I found was to copy the table column by column to Excel.

To select just the column in Acrobat X, hold down the Alt key and click-drag to highlight one table column. Use the normal copy command (Ctrl-C or right-click on the selection and choose Copy) to copy the column. Do not use the “Copy As Table…” command because it may result in corrupted or missing column values. In Excel, use the normal paste command (Ctrl-V or right-click on the first field and choose Paste); the “XML Spreadsheet” special paste option won’t be available for selection at all. Repeat the above for each of the remaining columns.

I found the column-by-column copy-paste approach to work great for small tables. If you have a huge table to copy, you may wish to use another method (for example, converting the PDF to Word and then copy-pasting the table from Word to Excel).

The copy-paste table methods described above and others were found in the comments at How to copy/Paste a table from PDF to Excel using AcrobatX.
