PC Build: Intel Core i7 Sandy Bridge

Recently I decided it was time to upgrade my main desktop computer. My current system, featuring an Intel DP55KG motherboard (P55 Chipset/Socket 1156), Intel Core i7-860 (Lynnfield) processor, AMD Radeon HD 5870 GPU, and 8 GB of DDR3-1600 Mushkin RAM, had served me well, but I was anxious to give Intel’s “Sandy Bridge” architecture a try. This post will document my upgrade, starting with the parts I selected and why; the assembly of the system and the challenges I encountered; and finally, a few thoughts on overclocking the upgraded system.

The Parts

As I stated in my previous posts concerning my Lynnfield build, I’ve built a good many PCs over the years. My goal now, as it was then, was to use the best quality components I could find for a low price, and build a good, fast, and reliable machine. In other words, build a machine that’s a good value.

The Case – Thermal management in Intel processors has progressed to the point where, unless you’re planning on doing some serious overclocking, there’s really no reason to consider water cooling over a good air cooling solution. Consequently, I was looking for a mid-tower case with good air flow. Since the NZXT Tempest case I used for my Lynnfield build had served me well in this regard, I decided to go with the next generation of that case, the NZXT Tempest Evo. Among other things, this case features dual 120mm intake and 140mm exhaust fans, along with an additional side 120mm fan and rear 120mm fan, making it one of the better cases for air cooling. Another nice addition is a slightly wider side panel design, which increases the available space behind the motherboard for cable routing, as well as a punchout behind the processor so one doesn’t have to completely remove the motherboard from the case in order to replace the heatsink and fan.

The Power Supply – I’ve almost exclusively used power supplies from two manufacturers: Fortron for lower cost builds and PC Power & Cooling for everything else. This time, however, I decided to go with the Corsair TX750. It’s a slightly less expensive unit than a comparable one from PC Power & Cooling, but Corsair maintains quality products and this unit still features a dedicated, single +12V rail for maximum, efficient power distribution.

The Processor – With Sandy Bridge, Intel launched no fewer than 29 different SKUs (15 mobile and 14 desktop), once again presenting a very challenging decision for the gamer/enthusiast building a new desktop system. After doing some research and giving it much consideration, I chose the Intel Core i7-2600K processor with a 3.4 GHz core clock, 8 MB of L3 cache and hyper-threading. Besides featuring the highest core clock among the Sandy Bridge desktop processors, the “K” suffix means the “turbo mode” multipliers, from 16x all the way up to 57x, are fully unlocked, giving this processor a lot of overclocking potential.

The Motherboard – I’ve traditionally used ASUS motherboards, but over the years I began to run into reliability problems with them. I also grew tired of the growing list of “features” their boards offered that I had no use for on my desktop machines (e.g. WiFi, Bluetooth, etc.), resulting in time spent maintaining drivers for these features or trying to disable them altogether. For my Lynnfield build I used Intel’s DP55KG and really liked it. No, it didn’t have all the features and overclocking capabilities of, say, an ASUS or Gigabyte motherboard of the time, but it turned out to be sufficiently overclockable for my needs and has been 100% reliable. However, the Sandy Bridge processor ushered in yet another socket change by Intel, Socket 1155, taking the option of reusing my existing Socket 1156-based Intel DP55KG motherboard off the table. Even so, given my overall good experience with Intel motherboards, I initially decided to go with them again for this build and chose their DP67BG. No sooner had this board arrived than Intel announced that it had discovered a design error in the P67 chipset: in some cases, the Serial ATA (SATA) ports within the chipset may degrade over time, potentially impacting the performance or functionality of SATA-linked devices such as hard disk drives and DVD drives. Since I had just purchased the product, I returned it as defective for a full refund and waited for Intel to roll out the next (so-called “B3”) version of this board. Unfortunately that day never came, and Intel showed no sign that it would update this product, so I once again looked to ASUS to meet my needs and ended up selecting their Sabertooth P67 board, my first, I should add, to feature a Unified Extensible Firmware Interface (“UEFI”). I also purchased a small 50 mm fan from Evercool to help improve upon the board’s “TUF Thermal Armor” cooling capabilities.

The Heatsink – I chose the Cooler Master Hyper 212 Plus after doing a bit of research to make sure it would clear the surrounding components on the motherboard, including the RAM. To improve its already very good cooling capabilities, I purchased an additional 120 mm fan from Cooler Master that matched the fan the product shipped with and set it up in a push/pull configuration. This fan configuration, combined with the NZXT case, should provide excellent overall processor cooling. Finally, to ensure that both fans would rotate at reasonably the same speed, I used a PWM splitter from Rosewill to power and control both fans from the processor fan header.

The RAM – I was looking for an 8 GB DDR3-1600 dual kit (2 x 4 GB) with timings as low as possible. Another factor I was glad I considered ahead of time was whether the RAM would fit under the processor’s fan/heatsink, given the close proximity of the RAM slots to the processor. I eliminated a couple of products (Corsair’s “Dominator” as an example) because they were too tall to fit, and ended up selecting G.Skill’s RipjawsX DDR3-1600 8 GB dual kit, which runs at 1.65v with timings that spec at 7-8-7-24-2N. I had heard and read good things about G.Skill memory and was eager to finally have the opportunity to give one of their products a try.

The Graphics – I have no particular allegiance to either AMD or Nvidia and was willing to go with a product from either depending on its price versus performance. I ended up going with AMD this time around and chose the MSI R6950 Twin Frozr II OC. For ~$290, I felt it provided the best performance for the money.

The Hard Drives – When I built my Lynnfield-based rig, solid state drive (SSD) prices were still relatively high compared to conventional spindle drives, and firmware support for features like “trim” was still so fluid that I decided to stick with my trusty Western Digital “Raptor” drives. SSD prices have since dropped and trim support is now fully implemented, so I decided it was time to upgrade. While I wanted to spring for a new SATA revision 3.0 (SATA 6 Gb/s) drive, I could not justify the cost at this time, so I chose instead the OCZ Vertex 2 80 GB SATA revision 2.0 (SATA 3 Gb/s) SSD. This will serve as my system drive, and a Western Digital Caviar Black 1TB 7200 RPM 64MB cache SATA 3.0 drive will hold my data.

The Optical Drive – Believe it or not I actually still use one of these :). I didn’t spend much time shopping for it though, instead I went to Newegg, navigated to CD/DVD Burners, selected “Best Rating” from among the search options, and dutifully paid for the ASUS DRW-24B1ST.

The Operating System – Not much of a surprise here, I went with Windows 7 Pro 64-bit. Why the Pro version instead of Home Premium? Remote Desktop. Home Premium doesn’t support it and I need this feature so I can access this machine remotely. Why Windows and not Linux? Games. I’m a PC gamer and this is my gaming rig (there are many like it but this one’s mine…). Someday perhaps computer gaming on *nix-based systems will be a viable option. No one hopes that day will come more than I do, but it’s not today.

The Build

With the component selection and purchase out of the way it was time to put them all together. I like to build systems outside of the case. Then, when I’m sure everything is running well, I’ll place the components in the case and dress up the wiring (See Figure 1).

Screenshot of my Core i7 Sandy Bridge build outside of the computer case

Figure 1

The SATA 2.0 and SATA 3.0 ports are mounted horizontally, facing the back of the case, instead of vertically, making it much easier to connect/disconnect disk drives with the graphics card in place. I placed the 80 GB OCZ Vertex 2 drive, which will hold the operating system, on the P67 controller’s SATA 3.0 port 1, and the 1TB WD Caviar Black drive on SATA 3.0 port 2. These are the brown SATA ports on the ASUS Sabertooth P67 motherboard.

The Sabertooth P67 is equipped with what ASUS calls “TUF Thermal Armor,” a rather fancy term for what is essentially a heatsink that encompasses nearly the entire motherboard. The idea behind this somewhat unorthodox design is to conduct the hot air generated by cards and components out of the case through special air flow channels, thus reducing the overall temperature of the motherboard, and by extension the inside of the PC case. To do this effectively, however, ASUS recommends that system builders use a processor fan that directs air downward onto the motherboard. Unfortunately, like most processor fan/heatsink products made for the PC enthusiast market, the Cooler Master Hyper 212 Plus is mounted vertically, directing air out the back of the case, not downward towards the motherboard’s components. In anticipation of this situation, ASUS provides a spot on the motherboard that allows one to mount a small 50mm “assistant” fan in order to improve the air flow through the TUF Thermal Armor (See Figure 2).

Screenshot of the 50 mm fan from Evercool installed on the ASUS P67 motherboard

Figure 2

After successfully assembling the components, and firing up the system without issue, I proceeded to update the P67 Sabertooth’s UEFI firmware to the latest version. Fortunately ASUS makes updating the firmware incredibly easy, offering a number of ways to perform the task, including directly from Windows. The one that worked best for me was to perform the update directly from within the UEFI. First, I inserted a USB drive containing the latest UEFI ROM file into a USB port. Then I entered the “Advanced Mode” of the UEFI, navigated to the “Tool” menu, and selected “ASUS EZ Flash Utility.” I highlighted the USB drive containing the ROM file and pressed “Enter” to proceed with the UEFI firmware update.

With the updated UEFI firmware now in place, it was time to explore the UEFI settings, ensuring that my drives were detected correctly, disabling unwanted features, etc. One issue I noticed fairly quickly was that my processor temperature was idling at ~42C at a room temperature of ~21C, a little hotter than what I would expect when using the Cooler Master Hyper 212 Plus. It should be noted here that on the P67 Sabertooth, the processor temperature reading in the UEFI can be anywhere from ~5C to ~10C higher than in Windows. That’s normal, as the UEFI does not send idle commands to the processor, as is the case when the OS is running. Investigating this issue further led me to this Benchmark Reviews article discussing the best thermal paste application methods for heat-pipe direct touch coolers. After a bit of experimentation, I obtained the lowest processor temps with my Cooler Master Hyper 212 Plus by applying two thin lines of Arctic Silver 5 thermal compound to the two center mounting base partitions (See Figure 3), lowering my processor temps to ~37C in the UEFI and ~31C when measured from within Windows using Real Temp.

Screenshot showing where to apply thermal compound on the Cooler Master Hyper 212 plus heatsink

Figure 3

Windows 7 Professional (x64) installed without issue on the OCZ Vertex 2 drive, but this is where my progress came to a temporary halt. One of the first applications I typically install shortly after installing the OS is CPU-Z, in order to evaluate processor core stepping and core voltage; internal and external processor clocks and clock multipliers; and memory frequency and timings. CPU-Z showed that the processor multiplier was “stuck” at 1.6 GHz (multiplier = 16x), never allowing the processor to rise to its specified default operating frequency of 3.4 GHz (multiplier = 34x) or its default Turbo mode frequency of 3.8 GHz (multiplier = 38x). After trying numerous UEFI tweaks, I gave up and concluded that the board was defective. I received a replacement board a week or so later, and after reassembling the requisite parts it worked as designed.

Another issue I encountered involved updating the firmware on the OCZ Vertex 2 80 GB drive to the most recent version – 1.33 at the time of this post. The OCZ Toolbox firmware update tool did not see the SSD because the tool is currently incompatible with the Intel Rapid Storage Technology (“RST”) driver software – v10.0.0.1046 at the time of this post. The workaround for this problem was to remove the Intel RST driver, update the firmware, then reinstall the Intel RST driver. Incidentally, I took the opportunity to run a few benchmark tests against this drive using ATTO Disk Benchmark and came away with 272 MB/s reads and 258 MB/s writes (128 KB transfer size).

With all of the device drivers installed, Windows’ Device Manager was still indicating that a driver for the “PCI Simple Communications Controller” was missing (the dreaded yellow exclamation mark). It turned out that I needed to download and install Intel’s Management Engine Interface utility from ASUS in order to clear that error.

Finally, to improve the reliability of the board’s Wake-on-LAN feature, I downloaded and installed Intel’s drivers for the Sabertooth P67’s 82579V Ethernet network interface controller. Then I entered the UEFI’s Advanced Mode, navigated to Advanced -> APM and enabled “Power On By PCI.”

With these issues out of the way I continued on, updating Windows, adding applications, tweaking options and generally letting the system burn in for a bit. Then it was time to move on to overclocking the system.

The Overclock

With the advent of Intel’s “Turbo Mode” feature starting in its Nehalem “Bloomfield” processor, the Base Clock (BCLK) multiplier value was able to automatically increase beyond its default value if the processor was operating within what the design considered to be a safe temperature tolerance. For example, in the Bloomfield architecture, processors were allowed to raise the stock multiplier value by 1 or 2 depending on the number of cores being used so long as the processor’s core temperatures did not rise beyond an arbitrary threshold. Intel’s Lynnfield processors generally ran cooler and so were allowed to be considerably more aggressive with Turbo Mode, increasing Turbo Mode multipliers within a range of ~2-5. In practice what this meant was that when fewer processor cores were in demand by a given application or process, larger multiplier values were used, thus allowing the processor to temporarily run at a higher clock rate than the default multiplier would normally allow. In the case of my Core i7 860 for example, with its BCLK of 133 MHz, it was not uncommon to see it use a multiplier value of 26 in single-threaded applications, yielding a processor speed of 3.46 GHz, well above its stock speed of 2.8 GHz when using the default multiplier of 21.

Consequently, when it came to overclocking, the Lynnfield architecture offered the user somewhat of a choice. You could attempt to overclock the system with Turbo Mode enabled, requiring you to be mindful of the headroom necessary when higher turbo multiplier values kicked in, or you could simply disable Turbo Mode and go with the more traditional overclocking approach. Either way, the steps were similar: adjust the BCLK to achieve your desired processor frequency; adjust the memory multiplier to compensate for the change in BCLK; and, if necessary, adjust the processor voltage, memory voltage, and Uncore voltage to stabilize the system; rinse and repeat.

The new Sandy Bridge technology, however, is a bit more challenging when it comes to overclocking. The new 100 MHz BCLK of Sandy Bridge processors doesn’t give users a lot of latitude in terms of increasing its value. If you’re lucky you can get it to run reliably at say 110 MHz. Multiply that value with your maximum Turbo Mode multiplier value, 38 in the case of the Core i7 2600K, and you’ll achieve ~4.2 GHz. Fortunately, with the K series processor, your overclocking options aren’t limited by the BCLK value; you’re also offered an unlocked multiplier ranging from 34 to 57, allowing you to potentially reach much higher processor speeds when operating in Turbo Mode.

Like most of the “enthusiast” motherboards on the market today, the Sabertooth P67 offers a method to automatically overclock your system, dispensing with the need in most cases to independently adjust BCLK, memory and voltage settings. In fact, the Sabertooth P67 offers two methods: one is available by navigating to the UEFI’s “EZ Mode” settings and selecting the “Performance” option. The other is available by navigating to Advanced Mode -> Ai Tweaker and selecting “OC Tuner.” I decided to give the EZ Mode Performance option a go and was quite happy with the results (See Figure 4).

Screenshot of the ASUS Sabertooth P67 Ai Tweaker settings after invoking the "Performance" option in EZ Mode

Figure 4

The BCLK was increased to 103 MHz and the Turbo Mode multiplier for all four cores to 43. This resulted in an overall processor speed of ~4.4 GHz when running in Turbo Mode. My DDR3-1600 memory settled at a speed of 1648 MHz. None of this is going to set any overclocking records, but you know what? It’s plenty fast for me for the time being, with plenty of headroom to make further increases at a later time if desired.

Next, I adjusted my memory timings to 7-8-7-24 and ran Memtest86+ for a couple of passes to ensure those timings were stable. Then I ran the 64-bit version of Prime 95 using the “Large in-place FFT” setting for ~24 hours to verify system stability and ensure that maximum processor core temperatures were kept in check. I should note that the ambient room temperature during the Prime 95 testing was ~21C. The tests resulted in no errors and the maximum processor core temperatures were ~65C (See Figure 5).

Screenshot of my desktop showing Prime 95, CPU-Z and Real Temp running simultaneously

Figure 5

Conclusion

Intel’s Core i7 2600K processor and ASUS’s Sabertooth P67 motherboard turned out to be a good mid-range combination. Throw in the MSI R6950 Twin Frozr II graphics card and the 8 GB DDR3-1600 dual kit from G.Skill and I couldn’t be more pleased with the results of my first Sandy Bridge build. Since its completion, the system has been 100% stable. Future plans for this system likely include replacing the 80 GB SATA 2.0 drive with a SATA 3.0 drive, and of course doing a bit more overclocking.

References
https://www.asus.com/us/Motherboards/SABERTOOTH_P67/

Use pfSense as an NTP Server

(20170504 — The steps in this post were amended to address changes in recent versions of software — iceflatline)

In a previous post, I described how to install and setup pfSense in a home network. In this post I will describe how to configure pfSense to act as a Network Time Protocol (“NTP”) server, as well as how to configure various hosts in your network to synchronize their local clock to this server.

pfSense is a customized version of FreeBSD tailored specifically for use as a perimeter firewall and router. It can be managed almost entirely from a web-based GUI. In addition to being a firewall and routing platform, pfSense includes a long list of other features, as well as a package system allowing its capabilities to be expanded even further. pfSense is free, open source software distributed under the BSD license.

Originally designed by David L. Mills of the University of Delaware circa 1985, NTP is a protocol for synchronizing the clocks of computer systems over packet-switched, variable-latency data networks, and one of the oldest Internet protocols still in use. NTP uses User Datagram Protocol (UDP) port number 123. pfSense uses OpenNTPD, a free, easy to use implementation of NTP.

The versions of the software used in this post were as follows:

  • FreeBSD 11.0-RELEASE
  • pfSense 2.3.3-RELEASE-p1
  • Ubuntu 16.04.2 LTS
  • Windows 7 Professional (x64)

Configure OpenNTPD in pfSense

Before configuring the OpenNTPD server, it’s a good idea to ensure that pfSense itself is keeping accurate time. The best way to do that is to have it synchronize its clock with one or more remote NTP servers. First though, you should make sure that the clock in the machine hosting pfSense is set to something close to accurate – if the difference is too great, pfSense will not synchronize properly with the remote NTP server.

Login to the pfSense machine using its “webConfigurator” (webGUI) interface. Navigate to System->General Setup and select the timezone that matches the physical location of the pfSense machine from among the options under “Timezone.” Next, enter the host name or IP address for the remote NTP server under “Timeservers” (Remember to add at least one DNS server under “DNS Servers” if you decide to use a host name instead of an IP address). Most likely you’ll find this field is already populated with one or more default remote NTP server(s) such as 0.pfsense.pool.ntp.org, 1.pfsense.pool.ntp.org, etc. These servers will work just fine in most cases, however you may get more accurate results if you use one of the continental zone servers (e.g., europe., north-america., oceania., or asia.pool.ntp.org), and even more accurate time if you choose to use one of the country zone servers (e.g., us.pool.ntp.org in the United States). For all of these zones, you can use the 0, 1 or 2 prefix, like 0.us.pool.ntp.org, to distinguish between servers from a particular region or country. (See the NTP Pool Project web site for more information on how to use pool.ntp.org servers). Like 0.pfsense.pool.ntp.org, these server entries will pick random NTP servers from a pool of known good ones. Also, while one NTP server is sufficient, you can improve reliability by adding more. Just make sure their host names or IP addresses are separated by a space.

Now that the pfSense machine is on its way to keeping accurate time, let’s configure its OpenNTPD server. Navigate to Services->NTP and pick which interface OpenNTPD should listen on. This will typically be the “LAN” interface. When complete, select “Save.” The OpenNTPD server will start immediately, however there may be a delay of several minutes before it is ready to service NTP requests, as it must first ensure that its own time is accurate. That’s all there is to it. You’ll find the OpenNTPD logs by selecting the “NTP” tab under Status->System Logs.

Configure Hosts

    Windows

After configuring the OpenNTPD server in pfSense, let’s configure a Windows host to synchronize its local clock to this server. Right-click on the time (usually located in the lower right corner of the desktop) and select “Adjust date/time.” Select the “Internet Time” tab, then select “Change settings.” Check the “Synchronize with an Internet time server” box, enter the host name or IP address of the pfSense machine, then select “Update now.” It’s not uncommon to get an error message the first time you attempt to update. Wait a few seconds and try again; you should receive a “The clock was successfully synchronized…” message.

    Linux

Many Linux distributions feature two utilities to help the local clock maintain its accuracy: ntpdate and/or ntpd (Note: Linux systems using systemd-timesyncd are discussed below). The ntpdate utility is typically included in Linux distributions as a default package, but if yours does not include it you can install it using your distribution’s package manager, for example:
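On an Ubuntu or other Debian-based system that would be something along these lines (package names can differ slightly between distributions):

    sudo apt-get install ntpdate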

ntpdate typically runs once at boot time and will synchronize the local clock with a default NTP server defined by the distribution. However, what if this machine isn’t rebooted often, as in the case of a server? And what about using the OpenNTPD server in the pfSense machine? We can address both of those issues by occasionally running ntpdate using the following command. For this and subsequent examples we’ll assume the IP address assigned to the LAN interface in the pfSense machine is 192.168.1.1:
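A one-off synchronization against the pfSense machine would look like this:

    sudo ntpdate 192.168.1.1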

Perhaps a more effective approach though is to use cron, a utility that allows tasks to be automatically run in the background at regular intervals by the cron daemon. These tasks are typically referred to as “cron jobs.” A “crontab” is a file which contains one or more cron job entries to be run at specified times. You can create a new crontab (or edit an existing one) using the command crontab -e under your user account. Because ntpdate needs to be run by the system’s root user, we’ll create the crontab using the command sudo crontab -e. Here are some example cron job entries using the ntpdate command. Note that cron will attempt to email the user the output of the commands it runs. To silence this, you can redirect the command output to /dev/null:
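The schedules below are just illustrations (adjust them, and the path to ntpdate, to suit your system); on Ubuntu the binary lives at /usr/sbin/ntpdate:

    # root crontab: sync the local clock against the pfSense machine once a day at midnight
    0 0 * * * /usr/sbin/ntpdate 192.168.1.1 > /dev/null 2>&1

    # or, more aggressively, once an hour on the hour
    0 * * * * /usr/sbin/ntpdate 192.168.1.1 > /dev/null 2>&1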

While using ntpdate will certainly work well, the utility ntpd on the other hand is considerably more sophisticated. It runs continually, calculating the drift of the local clock and then adjusting its time on an ongoing basis by synchronizing with one or more NTP servers. By default ntpd acts as an NTP client, querying NTP servers to set the local clock, however it can also act as an NTP server, providing time service to other clients. Alas though, nothing is free, and using ntpd will result in yet one more system process that may not otherwise be running in your system, consuming both CPU and memory resources, albeit modestly.

Like ntpdate many Linux distributions include ntpd by default. If yours does not, you can install it using your distribution’s package manager, for example:
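Again using Ubuntu as the example (the ntpd daemon is provided by the package named simply ntp):

    sudo apt-get install ntp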

The installation process should add ntpd to the requisite run levels and start its daemon automatically. Now we need to configure it so that it acts as an NTP client only and not as an NTP server. After all, that’s what we’re going to use the pfSense machine for, right?

ntpd is configured using the file /etc/ntp.conf. Open this file and comment out the existing ntp pool servers, then add the IP address assigned to the LAN interface on the pfSense machine. Again, we’ll assume this address is 192.168.1.1. Appending the “iburst” option to the address will provide for faster initial synchronisation. We’ll also want to comment out the NTP server-related options that allow the local machine to exchange time with other host devices on the network and interrogate the local NTP server. The remaining options can remain at their default settings:
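The exact contents of /etc/ntp.conf vary by distribution; the commented-out pool entries below are Ubuntu’s defaults, shown here only as an illustration, with the pfSense machine added as the sole server:

    # distribution default pool servers, commented out
    #pool 0.ubuntu.pool.ntp.org iburst
    #pool 1.ubuntu.pool.ntp.org iburst
    #pool ntp.ubuntu.com

    # synchronize with the OpenNTPD server in the pfSense machine instead
    server 192.168.1.1 iburst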

A comment about the driftfile option: this file is used to store the local clock’s frequency offset. ntpd uses this file to automatically compensate for the local clock’s natural time drift, allowing it to maintain reasonably accurate time even if it cannot communicate with the pfSense machine for some period of time.

While not required, you can run the ntpdate command one time to fully synchronize the local clock with the OpenNTPD server in pfSense, then restart ntpd.
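On Ubuntu that sequence might look like the following (ntpd must be stopped first, since it holds UDP port 123, which ntpdate also needs):

    sudo service ntp stop
    sudo ntpdate 192.168.1.1
    sudo service ntp start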

In recent Linux releases using systemd-timesyncd, timedatectl replaces ntpdate and timesyncd replaces the client portion of ntpd. By default timedatectl syncs the time by querying one or more NTP servers once on boot, and later uses socket activation to recheck once network connections become active. timesyncd, on the other hand, regularly queries the NTP server(s) to keep the time in sync. It also stores time updates locally, so that the clock monotonically advances across reboots, if applicable.

The NTP server(s) that timedatectl and timesyncd sync time from can be specified in /etc/systemd/timesyncd.conf. By default the system will rely upon the default NTP server(s) defined by the distribution and specified by FallbackNTP= in the “[Time]” section of the file. To use the OpenNTPD server in the pfSense machine, uncomment NTP= and append the IP address assigned to the LAN interface on the pfSense machine. Again, assuming this address is 192.168.1.1:
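The relevant portion of /etc/systemd/timesyncd.conf would then look something like this (the commented FallbackNTP line is Ubuntu’s default):

    [Time]
    NTP=192.168.1.1
    #FallbackNTP=ntp.ubuntu.com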

You can specify additional NTP servers by listing their host names or IP addresses separated by a space on these lines. Now run the following command:
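Presumably this is to restart timesyncd so it picks up the new configuration; on a systemd-based distribution that would be:

    sudo systemctl restart systemd-timesyncd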

The current status of time and time configuration via timedatectl and timesyncd can be checked with the command timedatectl status:

    FreeBSD

ntpdate and ntpd are installed in FreeBSD by default, however ntpd is not configured to start at boot time.

Similar to Linux, we can run ntpdate manually as root to synchronize with the OpenNTPD server running in the pfSense machine. Again, we’ll assume 192.168.1.1 is the IP address assigned to the LAN interface on the pfSense machine:
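For example (run as root):

    ntpdate 192.168.1.1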

ntpdate can also be made to run at boot time in FreeBSD by using the sysrc command to add the following lines to /etc/rc.conf:
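Those lines are set with the ntpdate_enable and ntpdate_hosts variables:

    sysrc ntpdate_enable="YES"
    sysrc ntpdate_hosts="192.168.1.1"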

We can also use cron under FreeBSD to run ntpdate. You can create a new crontab (or edit an existing one) using the command crontab -e as root. The example cron job entries provided above under Linux will also work under FreeBSD.

To enable ntpd under FreeBSD, open /etc/ntp.conf and comment out the existing ntp pool servers and add the IP address assigned to the LAN interface on the pfSense machine. The remaining options can remain at their default settings:
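A minimal /etc/ntp.conf along those lines (the commented-out pool entry is FreeBSD’s default):

    # default FreeBSD pool servers, commented out
    #pool 0.freebsd.pool.ntp.org iburst

    # synchronize with the OpenNTPD server in the pfSense machine instead
    server 192.168.1.1 iburst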

Now use sysrc to add the following two lines to /etc/rc.conf. The first line enables ntpd. The second tells the system to perform a one time sync at boot time:
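Those two lines are set with the ntpd_enable and ntpd_sync_on_start variables:

    sysrc ntpd_enable="YES"
    sysrc ntpd_sync_on_start="YES"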

Run the ntpdate command one time to synchronize the local clock with the OpenNTPD server in pfSense, then start ntpd.
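For example (as root):

    ntpdate 192.168.1.1
    service ntpd start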

Conclusion

This concludes the post on how to configure your pfSense machine to also act as an NTP server. The OpenNTPD service in pfSense will listen for requests from FreeBSD, Linux and Windows hosts and allow them to synchronize their local clocks with that of the OpenNTPD server in pfSense. Using pfSense as an NTP server in your network ensures that your hosts always have consistent, accurate time and reduces the load on the Internet’s NTP servers. Configuring Windows hosts to utilize this server is straightforward, while configuration under FreeBSD and Linux requires a bit more work.

References

Buechler, C.M. & Pingle, J. (2009). pfSense: The definitive guide. USA: Reed Media

http://www.openntpd.org/

http://www.pool.ntp.org/en/

http://www.ntp.org/documentation.html

http://support.ntp.org/bin/view/Support/WebHome

http://www.marksanborn.net/linux/learning-cron-by-example/

https://help.ubuntu.com/community/CronHowto

https://help.ubuntu.com/lts/serverguide/NTP.html

Secure Remote Access To Your Home Network Using pfSense and OpenVPN

(20180430 – The steps in this post were amended to address changes in recent versions of software. Minor editorial corrections were also made — iceflatline)

In my previous post, I described how to install and setup pfSense in a home network and offered some configuration recommendations based on my own experiences. In this post, I will describe how to set up Virtual Private Network (“VPN”) access in pfSense using OpenVPN. Once configured, you’ll be able to use an OpenVPN client in Windows or Linux to securely access your home network remotely using either X.509 PKI authentication (public key infrastructure using X.509-based certificates and private keys) or pre-shared private key authentication.

pfSense (i.e., “making sense of packet filtering”) is a customized version of FreeBSD tailored specifically for use as a perimeter firewall and router, and can be managed entirely from a web-based or command line interface. In addition to being a firewall and routing platform, pfSense includes a long list of other features, as well as a package system allowing its capabilities to be expanded even further. pfSense is free, open source software distributed under the BSD license.

OpenVPN is a lightweight VPN software application supporting both remote access and site-to-site VPN configurations. It uses SSL/TLS security for encryption and is capable of traversing network address translation devices and firewalls. The OpenVPN community edition is free, open source software and portable to most major operating systems, including Linux, Windows 2000/XP/Vista/7, OpenBSD, FreeBSD, NetBSD, Mac OS X, and Solaris. It is distributed under the GPL license version 2.

The versions of the software used in this post were as follows:

  • easy-rsa for Ubuntu, easy-rsa_2.2.2-2_all.deb
  • OpenVPN for Ubuntu, openvpn_2.4.4-2ubuntu1_amd64.deb
  • OpenVPN for Windows, 2.4.6
  • pfSense 2.4.3
  • Ubuntu Server 18.04 LTS
  • Windows 7 Professional

OpenVPN Authentication

Before walking through the steps necessary to configure the OpenVPN server in pfSense and OpenVPN client software in Windows and Linux, let’s discuss the differences between the two methods used by OpenVPN in most situations to authenticate peers (clients and servers) to one another: X.509 PKI and pre-shared private keys.

In the X.509 PKI authentication method, the private keys of the peers are kept secret and their public keys are made publicly available via “certificates” based on the ITU-T X.509 standard. The goal of a certificate is to certify that a public key belongs to the person or entity claiming to be its owner, or more accurately, the person/entity owning the corresponding private key. To achieve this certification, a certificate is signed by an authority that can be trusted by everyone: the Certification Authority (“CA”). In OpenVPN, a CA certificate and its corresponding private key are generated locally, as well as individual certificates and private keys for the OpenVPN server (located in pfSense in our case) and each OpenVPN client. The CA certificate and its associated private key are then used to sign the server and client certificates during the process of generating them. Then, when a VPN session is established, both server and client will authenticate the other by, among other things, verifying that the certificate each presents to the other was signed by the CA.

The X.509 PKI method has a number of advantages compared to the pre-shared key approach. First, the server only needs its own signed certificate and private key – it doesn’t need to know anything about the certificates associated with the client(s), only that they were signed by the CA certificate. Second, because the server can perform this signature verification without needing access to the CA private key, the CA key can be stored somewhere safe and secure. Finally, individual clients can be disabled simply by adding their certificates to a CRL (“Certificate Revocation List”) located on the server. The CRL allows compromised certificates to be selectively rejected without requiring that the entire PKI be rebuilt. This is the primary reason that X.509 PKI is considered the preferred method for implementing remote client access using OpenVPN – the ability to revoke access to individual machines. Alas though, there are some issues to be considered when using this approach. One of course is the integrity of the CA private key, which ensures the security of the PKI. Anyone with access to this key can generate certificates that can be used to gain VPN access to a network, so it must be kept secure and never distributed to clients or servers. Another is that creating and managing OpenVPN certificates and private keys may be a bit tedious, particularly if all you want to do is set up personal VPN access to your home network.

In the pre-shared key authentication method, a single static 2048-bit private key is generated and copied to the OpenVPN server and client. This shared key approach is typically used for site-to-site connections involving, say, two pfSense machines located at a main office and a remote office, with one acting as the OpenVPN server and the other as the client. However, it also offers the simplest setup for getting a VPN connection to your network up and running quickly with minimal configuration. There are some shortcomings with this method though that should be considered before using it. First, it doesn’t scale terribly well. Each client needing VPN access must have a unique OpenVPN server and TCP or UDP port defined in pfSense. This can be a pain to manage as the number of clients grows. This pain can be blunted a bit if you issue the same shared key to each client, but if the key is compromised, a new key must be generated and securely provided to the OpenVPN server and all clients. Another problem with this method is that the key exists in plaintext on each OpenVPN client (it also exists in plaintext in pfSense, but presumably access to that machine is further restricted), resulting in security that’s perhaps less than desirable.

In summary then, the X.509 PKI method offers scalability and arguably better security, but may be cumbersome for some to setup and manage; while the pre-shared key method is easier to setup, and likely just fine for implementations with a limited number of remote VPN clients. The remainder of this post will discuss how to configure a VPN using either method, but you’ll need to determine which approach best meets your needs.

Installing OpenVPN

OpenVPN comes pre-installed in pfSense so we’ll begin by installing OpenVPN on Windows and Linux, then use it to generate the necessary client and server keys and certificates. OpenVPN provides a set of batch files/scripts based on OpenSSL collectively called “easy-rsa” that will make the task of generating these certificates and keys much easier. To help explain the steps involved, we’ll generate the following certificates and keys:

ca.key
ca.crt
pfsense.key
pfsense.crt
bob.key
bob.crt
static-bob.key

Then we’ll copy these keys to the machines that need them and put them to work to create an OpenVPN connection to a home network that uses the subnet 192.168.10.0/24. Doing so will involve creating another subnet, 192.168.20.0/24, for our OpenVPN server and clients, and designating UDP port 13725 for the OpenVPN server to use to listen on for incoming VPN requests (See Figure 1).

Screenshot of example network using OpenVPN server and pfSense

Figure 1


    Installing OpenVPN and creating certificates and keys in Windows

OpenVPN for Windows is available from OpenVPN community downloads. During the install, accept the existing default options, and ensure that “EasyRSA 2 Certificate Management Scripts” is selected. The “Advanced” section provides some usability options which you can select/deselect based on your preferences. Once installed, OpenVPN will associate itself with files having the .ovpn extension.

Now we’re ready to generate the various certificates and keys. Open a command prompt and change folders to C:\Program Files\OpenVPN\easy-rsa\. Run the following batch file, which will copy the file vars.bat.sample to vars.bat. Note that this will overwrite any preexisting vars.bat file:
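The batch file in question should be init-config.bat, run from the easy-rsa folder:

    init-config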

Open the file C:\Program Files\OpenVPN\easy-rsa\vars.bat and take a look at the values associated with the variables KEY_COUNTRY, KEY_PROVINCE, KEY_CITY, KEY_ORG, and KEY_EMAIL. Assigning values to these variables helps speed the process of generating the various certificates and keys by initializing these common variables when the various easy-rsa batch files are invoked. You may leave them at their default values or change them if desired, however, don’t leave any blank. Now, make sure you’re at C:\Program Files\OpenVPN\easy-rsa\, then run the following batch files:
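That is, initialize the variables from vars.bat and then clear out any old keys with clean-all:

    vars
    clean-all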

Running this sequence of commands for the first time will create the folder C:\Program Files\OpenVPN\easy-rsa\keys\, which serves to hold the certificates and keys we generate (Note that you can define the location of this folder using the variable KEY_DIR in vars.bat). You may receive a message indicating that “The system cannot find the file specified”, however this message can safely be ignored. Also note that from now on, each time you run this sequence to create new certificates and keys, any existing certificates and keys in the keys folder will be deleted.

Okay then, let’s generate the CA certificate and CA private key. The only value which must be explicitly entered when requested is for the variable “Common Name”, which is set to “openvpn-ca” in the following example. An optional “Organization Unit Name” and “Name” value is also requested and may be modified if desired:
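With easy-rsa 2.x this is the build-ca batch file:

    build-ca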

Navigate to C:\Program Files\OpenVPN\easy-rsa\keys\, you should see the newly minted CA certificate (ca.crt) and CA private key (ca.key) files.

Next, we’ll generate a certificate and private key for the OpenVPN server that resides in pfSense. Here we’ll need to pass a text string to the batch file when invoking it. That text string then becomes the name for the server’s certificate and private key. In this example, we’ll use the text string “pfsense”. As in the previous step, most values will be automatically initialized by vars.bat. Once again, the value for Common Name must be explicitly entered, which we’ll set to “pfsense” to match the name of the server key. This batch file will also seek to define some additional optional attributes, including the “challenge password”, used in certificate revocation, which we’ll leave blank, and the “company name”, which you may fill in if desired. Finally, you’ll be asked to sign the server’s certificate using the CA certificate. Review the server certificate carefully, then select “y” to sign and then to commit the signature:
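In easy-rsa 2.x the script for this should be build-key-server, with our “pfsense” string passed as the argument:

    build-key-server pfsense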

Finally, let’s generate our client certificate and key following steps similar to the ones we used above for creating the server certificate and key. This time, however, the text string we pass to the batch file will become the name for the client certificate and key. In this example, we’ll use the text string “bob”, and set the Common Name value to be the same. Review the client certificate carefully, then select “y” to sign and then to commit the signature. You’ll need to run this same batch file for each client you want to grant VPN access to – and remember, you’ll need to use a unique key name/Common Name combination for each one:
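Using the build-key batch file with our “bob” string:

    build-key bob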

To further protect OpenVPN access, you may wish to password-protect the client’s private key. To do this we’ll need to use the build-key-pass.bat batch file. When used, you’ll be prompted to enter a password that will be used in conjunction with generating the private key. Now, anyone (including you) wishing to use this key when starting the OpenVPN connection will need to enter the correct password.
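For example:

    build-key-pass bob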

That’s it for installing OpenVPN and building your X.509 PKI in Windows. If you plan to use the pre-shared private key authentication method, you need only generate a single static key that will be used by both the OpenVPN server and client(s). In this example, we’ll use “static-bob” as the key file name and place it in the same folder our other certificates and keys are located:
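OpenVPN’s --genkey option generates the static key; assuming the default install path, something like:

    rem run from C:\Program Files\OpenVPN\easy-rsa so the key lands in the keys folder
    "C:\Program Files\OpenVPN\bin\openvpn.exe" --genkey --secret keys\static-bob.key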

    Installing OpenVPN and creating certificates and keys in Linux

If you’ve been following the installation and configuration of OpenVPN under Windows to this point, the steps used under Linux will seem familiar. To begin, download and install OpenVPN and easy-rsa using the distribution’s package manager. Using the Debian-based Ubuntu as an example:
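For example:

    sudo apt-get install openvpn easy-rsa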

Now copy the OpenVPN scripts used to generate the various certificates and keys into a new directory:
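One way to do this (the target directory /etc/openvpn/easy-rsa matches the paths used throughout the rest of this post):

    sudo mkdir /etc/openvpn/easy-rsa
    sudo cp -r /usr/share/easy-rsa/* /etc/openvpn/easy-rsa/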

Then make a symbolic link to latest version of openssl.cnf contained in /etc/openvpn/easy-rsa. In this example the latest version is openssl-1.0.0.cnf:
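For example:

    sudo ln -s /etc/openvpn/easy-rsa/openssl-1.0.0.cnf /etc/openvpn/easy-rsa/openssl.cnf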

We’re now ready to generate the various certificates and keys. Start by opening the file /etc/openvpn/easy-rsa/vars with your text editor and take a look at the variables KEY_COUNTRY, KEY_PROVINCE, KEY_CITY, KEY_ORG, and KEY_EMAIL. Assigning values to these variables in vars helps speed the process of generating the various certificates and keys by initializing these common variables in the various easy-rsa scripts. You may leave them at their default values or change them if desired, however, don’t leave any blank.

Change to the root user:
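One way to do this:

    sudo su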

Now, open a terminal, and run the following:
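As in the Windows steps, this means sourcing vars and then running clean-all from the easy-rsa directory:

    cd /etc/openvpn/easy-rsa
    source ./vars
    ./clean-all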

Running this sequence of commands for the first time will create the directory /etc/openvpn/easy-rsa/keys/, which will hold the certificates and keys we generate (Note that you can define the location of this directory using the variable KEY_DIR in vars). Also note that from now on, each time you run the clean-all script any existing certificates and keys in the keys directory will be deleted.

Okay then, let’s generate the CA private key and CA certificate. The build-ca script uses the “Organization Name” value as the default value for “Common Name”; however, we’ll change this to “openvpn-ca” in the following example. Values for “Organization Unit Name” and “Name” will also be requested and may be modified if desired:
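That is:

    ./build-ca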

Now, if you navigate to /etc/openvpn/easy-rsa/keys/, you should see the newly minted CA certificate (ca.crt) and CA private key (ca.key) files.

Next, we’ll generate a certificate and private key for the OpenVPN server that resides in pfSense. Here we’ll need to pass a text string to the script when invoking it. That text string then becomes the name for the server’s certificate and private key. In this example, we’ll use the text string “pfsense”. As in the previous step, most values will be automatically initialized by the file vars. We’ll leave the Common Name value at the default “pfsense” to match the name of the server key. This script will also seek to define some optional attributes, including the “challenge password”, which we will leave blank, and the “company name”, which may be filled in if desired. Finally, you’ll be asked to sign the server’s certificate using the CA certificate. Review the values in the server certificate carefully, then select “y” to sign and then to commit the signature:
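As on the Windows side, the script here should be build-key-server, with our “pfsense” string passed as the argument:

    ./build-key-server pfsense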

Finally, let’s generate our client certificate and key following steps similar to the ones we used above for creating the server certificate and key. This time, however, the text string we pass to the script will become the default name for the client certificate and key. In this example, we’ll use the text string “bob”, which will also become the default value for the Common Name. Review the client certificate carefully, then select “y” to sign and then to commit the signature. You’ll need to run this same script for each client you want to grant access to – and remember, you’ll need to use a unique key name/Common Name combination for each one:
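Using the build-key script with our “bob” string:

    ./build-key bob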

To further protect OpenVPN access, you may wish to password-protect the client’s private key. To do this simply run the build-key-pass script instead of build-key. You will be prompted to enter a password that will be used in conjunction with generating the private key. Now, anyone (including you) wishing to use this key will need to enter the correct password.
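For example:

    ./build-key-pass bob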

That’s it for installing OpenVPN and building your X.509 PKI in Linux. If you plan to use the pre-shared private key authentication method instead, you need only generate a single static key that will be used by both the OpenVPN server and client(s). In this example, we’ll use “static-bob” as the key file name and place it in the same directory our other certificates and keys are located:
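Again using OpenVPN’s --genkey option, run from the easy-rsa directory so the key lands alongside the others:

    cd /etc/openvpn/easy-rsa
    openvpn --genkey --secret keys/static-bob.key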

* Note: when you’re finished using easy-rsa make sure to exit out of being the root user.

Configuring OpenVPN: X.509 PKI Authentication

Now that we’ve installed OpenVPN client software in Windows and Linux, and generated the various certificates and keys, let’s move on and discuss how to configure these clients and the OpenVPN server in pfSense for VPN access into our home network using the X.509 PKI authentication method. Before we do though, now would be a good time to move the file ca.key to a secure location. Recall that the privacy of this key is what ensures the security of your entire OpenVPN PKI.

    OpenVPN Server configuration in pfSense for X.509 PKI authentication

To configure the OpenVPN server in pfSense for X.509 PKI authentication, we’ll start by importing the server certificate and private key we created, as well as our CA certificate.

Log into your pfSense box’s “webConfigurator” interface and navigate to System->Cert. Manager->CAs. To import a new CA, select the “+ Add” icon. Add a name for your CA (e.g., “OpenVPN Server CA #1”) under “Descriptive name”, then click on the drop down list under “Method” and ensure that “Import an existing Certificate Authority” is selected. Carefully and securely copy the contents of the ca.crt file from the machine you generated it on and paste it into the “Certificate data” field, then select “Save”.

Now, navigate to System->Cert. Manager->Certificates and select the “+ Add/Sign” icon to import a new certificate and its private key. Ensure that “Import an existing Certificate” is selected under “Method”, then add a name for your certificate (e.g., “OpenVPN Server Cert #1”) under “Descriptive name”. Carefully and securely copy the contents of the pfsense.crt file from the machine you generated it on and paste it into the “Certificate data” field, and the contents of pfsense.key into the “Private key data” field. Finish by selecting “Save”.

The information you copy from your pfsense.crt and pfsense.key files will no longer appear in these fields after you select Save. Rest assured, however, that they are stored in pfSense. In fact, you will find them stored as server1.cert and server1.key respectively, along with server1.ca, in /var/etc/openvpn after the OpenVPN server is created.

Next we’ll configure the various OpenVPN server parameters. Navigate to VPN->OpenVPN->Server and click on the “+ Add” icon to add a new OpenVPN server (See Figure 2).

 Screenshot of pfSense configuration wizard

Figure 2

Most of the parameters will be left at their default settings, however the following will need to be configured:

Server mode – We’re using the X.509 PKI authentication method for this example so we should select “Remote Access(SSL/TLS)” here.

Local port – This is the logical port that the OpenVPN server will listen on for VPN connections. To help improve security, we’ll avoid using OpenVPN’s default port 1194 and use port 13725 instead.

Description – Enter a description here for your reference. This field is optional but helpful.

TLS Configuration – Using this feature is beyond the scope of this post so for now uncheck “Use a TLS Key”. If you would like to learn more about using TLS authentication, consult the hardening OpenVPN security section of the OpenVPN documentation.

Peer Certificate Authority – Ensure that the CA you imported under System->Cert. Manager->CA (e.g., “OpenVPN Server CA #1”) is selected here.

Server certificate – Ensure that the certificate you imported under System->Cert Manager->Certificates (e.g., “OpenVPN Server Cert #1”) is selected here.

DH Parameter Length – Change this setting to “2048 bit” to match the default DH key length OpenVPN uses to generate the keys and certificates.

Encryption Algorithm – Set this to “AES-256-GCM”, a good strong algorithm that also matches the default NCP algorithm.

IPv4 Tunnel Network – This is the IPv4 subnet from which the OpenVPN server will assign IP addresses. The server will automatically assign the first host address from this subnet to itself, while the remaining host addresses will be used for remote VPN clients. Our example VPN network will use subnet 192.168.20.0/24 so enter that here.

IPv4 Local Network(s) – This is the subnet that will be accessible from the VPN; in other words, your home network’s subnet. Our example home network has the subnet 192.168.10.0/24 so enter that here.

Compression – Setting this parameter will result in compressing the traffic traversing your VPN connection, making more efficient use of your available bandwidth. LZO and LZ4 are different compression algorithms, with LZ4 generally offering the best performance with least CPU usage. Let’s use “LZ4 Compression v2 [compress lz4-v2]”.

In addition to the parameters above, there are several others you might be interested in depending upon your specific requirements. These are optional, but may help improve your OpenVPN experience:

Dynamic IP – This setting allows connected clients to retain their connections if their IP address changes.

Provide a default domain name to clients – This setting allows you to specify a domain name to be assigned to your VPN clients. You should enter the same domain name you entered in the “Domain” field under System -> General Setup.

DNS Server enable – This setting allows you to specify one or more DNS servers to be used by the OpenVPN clients while connected. If you’re using pfSense as your DNS forwarder, then enter the pfSense LAN IP address here, else enter the IP address(es) of the DNS servers you entered in the “DNS servers” fields under System -> General Setup.

NTP Server enable – This setting allows you to specify the NTP server(s) to be used by the OpenVPN clients while connected. If you’re using pfSense as your NTP server then enter the pfSense LAN IP address here, else enter the IP address(es) of the NTP server(s) you entered in the “Timeservers” field under System -> General Setup.

NetBIOS enable – If you need Microsoft’s NetBIOS protocol then selecting this will ensure it is propagated over your VPN connection.

Verbosity level – I suggest changing this to “3 (Recommended)” to reduce the number of log entries.

When you’re finished making changes, select “Save” at the bottom of the page to complete the configuration.

Next, we’ll need to create two new firewall rules to allow our incoming OpenVPN connection to pass through to the OpenVPN server on the port it’s listening on. First, navigate to Firewall -> Rules, select the “Add” icon to add a rule to the bottom of the list and define the following parameters:

Action: Pass
Interface: WAN
Protocol: UDP
Destination: WAN address
Destination port range: Custom 13725 To Custom 13725
Description: OpenVPN

When you’re finished making changes, select “Save” and “Apply Changes”.

Now navigate to Firewall -> Rules -> OpenVPN, select the “Add” icon and define the following parameters:

Protocol: Any
Description: OpenVPN

Finally, we’ll need to add the OpenVPN interface, which pfSense automatically creates when it enables the OpenVPN server. Navigate to Interfaces -> Assignments and add this interface, then select “Save” (See Figure 3).

Screenshot of showing the assignment of the OpenVPN interface

Figure 3
    OpenVPN client configuration in Windows for X.509 PKI authentication

To configure your Windows OpenVPN client for X.509 PKI authentication, copy the client certificate and key, as well as the CA certificate from C:\Program Files\OpenVPN\easy-rsa\keys\ to C:\Program Files\OpenVPN\config\. You’ll recall these files were bob.crt, bob.key, and ca.crt.

Next, we need to create a client configuration file so our OpenVPN client knows how to connect to the server. Fortunately, OpenVPN includes a sample client configuration file so we don’t have to create one from scratch. Copy the file C:\Program Files\OpenVPN\sample-config\client.ovpn to C:\Program Files\OpenVPN\config\. You can rename the file if desired, however, make sure to retain the *.ovpn extension. Open this file in your text editor and modify the remote line so that it specifies either the FQDN (“Fully Qualified Domain Name”) or WAN IP address of your pfSense box, followed by the port number the OpenVPN server is listening on, which in our case is port 13725:
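For example, where vpn.example.com stands in for your pfSense box’s FQDN or WAN IP address:

    remote vpn.example.com 13725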

We need to modify the cert and key lines so they point to our certificate and key file names:
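Using the example file names from earlier:

    cert bob.crt
    key bob.key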

Since we’re not using a TLS key in this configuration let’s disable this feature. Comment out the line tls-auth ta.key 1:
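So that it reads:

    # tls-auth ta.key 1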

We also need to tell the client which cryptographic algorithm to use. This should match the algorithm we configured in the OpenVPN server. Assuming you used the AES-256-GCM algorithm there then modify the cipher line so that it looks like the following:
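That is:

    cipher AES-256-GCM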

Since we’ve enabled LZ4 v2 compression in the OpenVPN server let’s ensure that algorithm is also used in the client:
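That is (if the sample file contains a comp-lzo line, comment it out as well):

    compress lz4-v2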

Finally, let’s uncomment the auth-nocache parameter to strengthen security a bit:
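So that the line reads:

    auth-nocache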

Now we’re ready to make our first OpenVPN connection. First, if you’re using Windows’ firewall, make sure it is configured to allow VPN traffic to pass to/from the TAP-Windows adapter installed by OpenVPN. Now, open the Windows Start menu and select “OpenVPN”, then “OpenVPN GUI”. The OpenVPN GUI will launch and automatically minimize to the task tray. Right-click on the icon and select “Connect”. If you elected to use a password to protect the client private key you’ll be asked to enter that password before OpenVPN will proceed with the connection (See Figure 4). After the OpenVPN client connects with the server in pfSense, the GUI will once again minimize to the task tray. Now try to ping the IP address of a host on the home network’s subnet. If the ping succeeds, congratulations! You’re now securely connected through your pfSense box to your home network using OpenVPN and X.509 PKI authentication.

Screenshot of OpenVPN GUI launching VPN connection with password field

Figure 4

By the way, there are a couple of other ways you can start your OpenVPN connection in Windows. You can simply right-click on an OpenVPN configuration file and select “Start OpenVPN on this config file.” You can also start it by using the command openvpn at a command prompt – you’ll need to be in the folder c:\Program Files\OpenVPN\config for this to work (else make the appropriate adjustments to your PATH environment variable). Pressing the F4 key will terminate the connection.
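For example, assuming the default install path and a configuration file named client.ovpn, something like:

cd "c:\Program Files\OpenVPN\config"
openvpn --config client.ovpn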

    OpenVPN client configuration in Linux for X.509 PKI authentication

To configure your OpenVPN client in Linux for X.509 PKI authentication, move the client certificate and key files, as well as the CA certificate, from /etc/openvpn/easy-rsa/keys/ to /etc/openvpn/. You’ll recall these files were bob.crt, bob.key, and ca.crt.

Next, we need to create a client configuration file so our OpenVPN client knows how to connect to the server. Fortunately, OpenVPN for Linux includes a sample configuration file so we don’t have to create one from scratch. Copy the file /usr/share/doc/openvpn/examples/sample-config-files/client.conf to /etc/openvpn/. You can rename the file if desired; however, make sure to retain the *.conf extension. Now open this file in your text editor and make the same changes as described above for the OpenVPN client configuration file in Windows. Remember to save your changes. Now we’re ready to make our first VPN connection. Open a terminal and use the following commands, replacing client.conf with the name of your configuration file if you changed it:
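One way of doing this, assuming the configuration file resides in /etc/openvpn:

cd /etc/openvpn
sudo openvpn --config client.conf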

If you elected to use a password to protect the client private key, then you’ll be asked to enter that password before OpenVPN will proceed with the connection, which will end with an “Initialization Sequence Completed” message. Now, open another terminal, and try to ping the IP address of a host on the home network’s subnet. If the ping succeeds, congratulations! You’re now securely connected through your pfSense box to your home network using OpenVPN and X.509 PKI authentication.

To simplify troubleshooting, it’s best to initially start the OpenVPN client from the command line as described above. Once you know it can reliably connect to the server, you can start it as a daemon. Note, however, that if you elected to password-protect the client key (e.g., bob.key) when generating it using the ./build-key-pass script, you may need to first add the password to some type of Linux password agent. For example, to add the password to Ubuntu’s password agent systemd-tty-ask-password-agent:
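The exact workflow varies a bit between releases, but one possible invocation looks like the following; run it when a passphrase request is pending and enter the key password at the prompt:

# Answer any pending passphrase requests (invocation is illustrative)
sudo systemd-tty-ask-password-agent --query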

Then you can start the OpenVPN daemon as follows:
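On Ubuntu and similar distributions this might look like one of the following (the exact service name can vary between releases, so check your distribution’s documentation):

# sysvinit-style systems
sudo /etc/init.d/openvpn start

# systemd-based systems
sudo systemctl start openvpn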

When executed, OpenVPN will scan for any configuration files (i.e. *.conf) in /etc/openvpn/, and will attempt to start a separate daemon for each one it finds.

Configuring OpenVPN: Pre-shared Private Key Authentication

Having previously gone through the steps necessary to configure the OpenVPN server and clients for OpenVPN access into our home network using the X.509 PKI authentication method, it’s time now to discuss how to configure these same components for OpenVPN access using pre-shared private key authentication. Once again, to help explain the steps involved, we’ll assume there is an existing home network that is currently using the IP subnet 192.168.10.0/24; we’ll use the subnet 192.168.20.0/24 for our VPN; and, we’ll designate UDP port 13725 as the port the OpenVPN server will listen on (See Figure 1).

    OpenVPN server configuration in pfSense for pre-shared private key authentication

Log into your pfSense box’s “webConfigurator” interface, navigate to VPN -> OpenVPN -> Server, and click on the “+ Add” icon to add a new OpenVPN server (See Figure 2). Most of the options will be left at their default settings, however the following will need to be configured:

Server mode – We’re using the pre-shared private key authentication method in this example so we should select “Peer to Peer (Shared Key)” here.

Local port – This is the logical port that the OpenVPN server will listen on for VPN connections. To help improve security, we’ll avoid using OpenVPN’s default port 1194 and use port 13725 instead.

Description – Enter a description here for your reference. This field is optional but helpful.

Shared Key – Uncheck the “Automatically generate a shared key” box, then carefully and securely copy the contents of the static-bob.key file from the machine on which you generated it and paste it into the provided field.

Encryption algorithm – This is the cipher that OpenVPN will use to secure your VPN traffic. Since GCM encryption algorithms can’t be used with shared key server mode let’s use AES-256-CBC.

IPv4 Tunnel Network – This is the subnet from which the OpenVPN server will assign IP addresses. The server will automatically assign the first host address from this subnet to itself, while the remaining host addresses will be used for remote VPN clients. Our example VPN network will use subnet 192.168.20.0/24 so enter that here.

IPv4 Remote network(s) – This is the subnet that will be accessible through the VPN; in other words, your home network’s subnet. Our example home network has the subnet 192.168.10.0/24 so enter that here.

Compression – Setting this parameter will compress the traffic traversing your VPN connection, making more efficient use of your available bandwidth. LZO and LZ4 are different compression algorithms, with LZ4 generally offering the best performance with the least CPU usage. Let’s use “LZ4 Compression v2 [compress lz4-v2]”.

Verbosity level – I suggest changing this to “3 (Recommended)” to reduce the number of log entries.

When you’re finished making changes, select “Save” at the bottom of the page to complete the configuration.

Next, we’ll need to create two new firewall rules to allow our incoming OpenVPN connection to pass through to the OpenVPN server on the port it’s listening on. First, navigate to Firewall -> Rules, select the “Add” icon to add a rule to the bottom of the list, and define the following parameters:

Action: Pass
Interface: WAN
Protocol: UDP
Destination: WAN address
Destination port range: Custom 13725 To Custom 13725
Description: OpenVPN

When you’re finished making changes, select “Save” and “Apply Changes”.

Now navigate to Firewall -> Rules -> OpenVPN, select the “Add” icon and define the following parameters:

Protocol: Any
Description: OpenVPN

Finally, we’ll need to add the OpenVPN interface, which pfSense automatically creates when it enables the OpenVPN server. Navigate to Interfaces -> Assignments and add this interface, then select “Save” (See Figure 3).

    OpenVPN client configuration in Windows for pre-shared private key authentication

To configure your Windows client for pre-shared private key authentication, copy the key file static-bob.key from c:\Program Files\OpenVPN\easy-rsa\keys\ to c:\Program Files\OpenVPN\config\. Now we need to create a client configuration file so our OpenVPN client knows how to connect to the server. Since OpenVPN for Windows doesn’t include a sample client configuration file for the pre-shared private key authentication method, we’ll use the following, which is based on the sample static-home configuration file included with the OpenVPN installation in Linux. Copy and paste this text into your text editor and modify the remote line so that it specifies either the FQDN (“Fully Qualified Domain Name”) or WAN IP address of the pfSense box, followed by the port number the OpenVPN server is listening on, which in our example is port 13725. The secret line specifies the name of our static key file, which in our case is static-bob.key. Save the file as client.ovpn and copy it to c:\Program Files\OpenVPN\config\. You can rename the file if desired; however, make sure to retain the *.ovpn extension:
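What follows is a representative version; the hostname and tunnel endpoint addresses are placeholders (we assume the server takes 192.168.20.1 and the client 192.168.20.2 from the example tunnel network), so adjust them to match your own setup:

# Illustrative client.ovpn for pre-shared private key authentication
dev tun
proto udp
# Placeholder hostname; substitute your pfSense FQDN or WAN IP
remote vpn.example.com 13725
# Assumed tunnel endpoints from the 192.168.20.0/24 tunnel network
ifconfig 192.168.20.2 192.168.20.1
# Route the home network subnet through the tunnel
route 192.168.10.0 255.255.255.0
# Our pre-shared static key
secret static-bob.key
# Match the cipher configured on the server
cipher AES-256-CBC
# Match the compression configured on the server
compress lz4-v2
keepalive 10 60
persist-key
persist-tun
verb 3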

If you would like to specify a domain name to be assigned to your VPN client, add the following line to your client configuration file. You should enter the same domain name you entered in the “Domain” field under System -> General Setup.
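For example (the domain shown is only a placeholder):

# Placeholder domain; use the domain from System -> General Setup
dhcp-option DOMAIN home.example.org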

If you would like to specify the DNS server(s) to be used by your OpenVPN client while connected, add the following line to your client configuration file. If you’re using pfSense as your DNS forwarder, then specify the pfSense LAN IP address here, else specify the IP address(es) of the DNS servers you entered in the “DNS servers” fields under System -> General Setup.
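For example, assuming pfSense acts as the DNS forwarder and its LAN address in our example network is 192.168.10.1:

# Assumed pfSense LAN IP acting as the DNS forwarder
dhcp-option DNS 192.168.10.1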

Now we’re ready to make our OpenVPN connection. First, if you’re using the Windows firewall, make sure it is configured to allow VPN traffic to pass to/from the TAP-Windows adapter installed by OpenVPN. Now, open the Windows Start menu and select “OpenVPN”, then “OpenVPN GUI.” The OpenVPN GUI will launch and automatically minimize to the task tray. Right-click on the icon and select “Connect”. After the OpenVPN client connects with the server in pfSense, the GUI will once again minimize to the task tray. Now try to ping the IP address of a host on the home network’s subnet. If the ping succeeds, congratulations! You’re now securely connected through your pfSense box to your home network using OpenVPN and pre-shared private key authentication.

    OpenVPN client configuration in Linux for pre-shared private key authentication

To configure your OpenVPN client in Linux for pre-shared private key authentication, copy the key file static-bob.key from /etc/openvpn/easy-rsa/keys/ to /etc/openvpn/. Then, copy the sample configuration file discussed above in the Windows section to /etc/openvpn/ (alternatively, you can use the sample configuration file included with OpenVPN for Linux; see /usr/share/doc/openvpn/examples/sample-config-files/static-home.conf). You can rename the file if desired; however, make sure to retain the *.conf extension. Now, open the file in your text editor and make the appropriate changes to the remote and secret lines as discussed above for the Windows configuration file. Remember to save your work.

Now we’re ready to make our VPN connection. Open a terminal and use the following commands, replacing client.conf with the name of your config file if you changed it:
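As before, one way of doing this, assuming the configuration file resides in /etc/openvpn:

cd /etc/openvpn
sudo openvpn --config client.conf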

To simplify troubleshooting, it’s best to initially start the OpenVPN client from the command line as described above. However, once you know it can reliably connect to the server, then you can start it as a daemon:
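As in the X.509 section above, on Ubuntu and similar distributions this might look like the following (the exact service name can vary between releases):

sudo /etc/init.d/openvpn start
# or, on systemd-based releases
sudo systemctl start openvpn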

Now, open another terminal, and try to ping the IP address of a host on the home network’s subnet. If the ping succeeds, congratulations! You’re now securely connected through your pfSense box to your home network using OpenVPN and pre-shared private key authentication.

Conclusion

This concludes the post on how to configure secure remote access to your home network using pfSense and OpenVPN. Two methods are typically used by OpenVPN to authenticate the server and remote clients to one another: X.509 PKI or pre-shared private keys. The X.509 PKI method offers scalability and arguably better security, but may be overkill for those who want single-user VPN access to their home network. Conversely, the pre-shared key method does not scale well beyond one or two users, but is easier to set up and likely just fine for a small network with a limited number of remote VPN clients. For a full list of all the configuration options and other information I encourage you to visit the OpenVPN community software web site.

References

http://www.openvpn.net/index.php/open-source/documentation.html

bookmark_borderFixing Ethernet Connection Problems on the Lenovo ThinkPad T410

Earlier this year I purchased a Lenovo ThinkPad T410 laptop. Nice box. But shortly after purchasing it I began to notice that its ethernet adaptor would lose connection on a regular-yet-random basis regardless of the network I happened to be on. I dual-boot with this machine and I did not seem to be experiencing the same problem while running Ubuntu. So… I suspected the culprit might be my Windows 7 network driver. Sure enough, after trying several versions of Lenovo-supported drivers, the ultimate solution to this problem was to dump the Lenovo driver completely and download the driver for the 82577LM ethernet controller directly from Intel. Problem solved.

Note that in addition to installing the base driver for the ethernet controller, the package will also give you the option to install Intel PROSet for Windows Device Manager, Intel Advanced Networking Services, and SNMP for Intel network adapters for Windows 7. The first two are selected for you by default. If installed, Intel’s PROSet software provides a custom device manager property page for the adaptor, which has some pretty nice features, including diagnostics. Contrary to its name, the Intel Advanced Networking Services feature does not install additional Windows services; rather, it installs a couple of extra tabs in the aforementioned device manager property page allowing you to set up and manage teaming and VLAN tagging on the adaptor. The SNMP for Intel network adapters feature is simply an SNMP agent enabling you to send event notifications via SNMP (it requires that the Windows SNMP service be running).

bookmark_borderIntel Core i7 Build: Overclocking the Intel DP55KG and Core i7 860

This is the third post documenting my upgrade to an Intel Core i7 Lynnfield system. In my first post I discussed the components I selected and why. I talked about assembling the system and some of the challenges I encountered in my second post, and in this final post I’ll be discussing my efforts at overclocking the Intel DP55KG motherboard and Core i7 860 processor.

Two Approaches

Intel’s new “Turbo Mode” feature is able to increase the processor multiplier value beyond its default value (21 in the case of the Core i7 860) if the processor is operating within what it considers safe temperature parameters. For example, in Intel’s Core i7 Bloomfield architecture, processors are allowed to raise the stock multiplier value by 1 or 2 depending on the number of cores being used. Intel’s Lynnfield processors are considerably more aggressive with Turbo Mode, increasing Turbo Mode multipliers within a range of ~2-5. Essentially, this means that when fewer processor cores are demanded by an application or process, larger multiplier values are used, thus allowing the processor to run faster than the default multiplier would normally allow. In the case of the Core i7 860, it’s not uncommon, for example, to see it use a multiplier value of 26 in single-threaded applications, yielding a processor speed of 3.46 GHz, well above its stock speed of 2.8 GHz. While this sort of dynamic overclocking is pretty damn impressive, a question arose for me when it came time to overclock my Intel DP55KG and Core i7 860: should I attempt to overclock the system with Turbo Mode enabled, meaning I would have to consider the headroom required when higher multiplier values are used, or should I simply disable it and go with the more traditional overclocking approach? I ended up trying both approaches to see how they compared and to evaluate which would work best for me.

Regardless of which approach you use, though, overclocking a Lynnfield system is pretty straightforward: adjust the host clock frequency until the system achieves a stable CPU speed. From there, the memory multiplier can be adjusted to compensate for the change in host frequency. If desired or needed, you can also adjust the CPU voltage, memory voltage, and Uncore voltage to further stabilize the system. That’s pretty much all the adjusting the architecture allows you to do.

    Turbo Mode enabled

My first attempt at overclocking the Intel DP55KG and the Core i7 860 involved raising the host clock frequency while leaving Turbo Mode enabled. These are the BIOS settings I started with:

Performance

Host Clock Frequency Override: Manual

Performance -> Processor Overrides

CPU Voltage Override Type: Dynamic
CPU Voltage Override: Default (default)
CPU Idle State: High Performance
Intel Turbo Boost Technology: Enabled (default)

Performance -> Memory Configuration

Performance Memory Profiles: Manual – User Defined
Memory Multiplier: 12
Memory Voltage: 1.65
Uncore Voltage Override: 1.10 (default)

Performance -> Bus Overrides

All settings in this section were left at their default values.

Power

Enhanced Intel SpeedStep Tech: Enabled (default)
CPU C State: Enabled (default)

With this approach, my objective was to try to achieve the best stable overclock I could using Turbo Boost and leaving the voltage settings at their default values. However, I did alter two voltage settings: the CPU Voltage Override Type, which I set to Dynamic, allowing the CPU to still manage its own power usage but with higher upper limits; and the Memory Voltage, which I set to 1.65 to match the voltage input specified for my Mushkin DDR3-1600 kits. I left the RAM timings at the default SPD values of 9 9 9 24.

And the result? I was able to achieve a host clock frequency of 154 MHz before the system became unstable (stability in this case is defined as the ability for the system to run without failure using Prime95 (v25.9) Large FFT for 2-3 hours). This yielded a CPU speed of 4 GHz, assuming a Turbo Boost multiplier of 26 (154 * 26 = 4.00 GHz). I did notice, however, that the multiplier in my case generally liked to stay at 25 a large percentage of the time during idle. I suspect this was the result of the High Performance setting in BIOS that forces the system to use the higher multiplier when the operating system would otherwise be allowed to lower it.

According to CPU-Z (v1.53), the CPU voltage (VID) fluctuates between .8 and .9 at idle, and core temperatures according to SpeedFan (v4.40) were ~30°C at idle. Given the DRAM multiplier setting of 12, the DRAM frequency weighed in at a nice 1848 MHz. Loading all four cores resulted in VID rising to 1.096 volts and core temperatures to ~63°C. Using all four cores, of course, also resulted in the system using the default CPU multiplier value of 21 (154 * 21 = 3.23 GHz).

So, in summary, I was able to achieve a ~15% overclock under load using Turbo Boost and leaving the voltage settings at their default values.

    Turbo mode disabled

After determining the optimal overclocking settings for my Intel DP55KG and the Core i7 860 using default voltages and Turbo Mode enabled, I attempted to overclock the system with Turbo Boost disabled, as well as the freedom to use higher voltage settings, if necessary, to make the system stable. These are the BIOS settings I started with:

Performance

Failsafe Watchdog: Enable (default)
Host Clock Frequency Override: Manual
Host Clock Frequency: 133

Performance -> Processor Overrides

CPU Voltage Override Type: Static
CPU Voltage Override: Default (default)
CPU Idle State: High Performance
Intel Turbo Boost Technology: Disabled

Performance -> Memory Configuration

Performance Memory Profiles: Manual – User Defined
Memory Multiplier: 10
Memory Voltage: 1.65
Uncore Voltage Override: 1.10 (default)

Performance -> Bus Overrides

All settings in this section were left at their default values.

Power

Enhanced Intel SpeedStep Tech: Disabled
CPU C State: Disabled

And the result? With Turbo Boost disabled and the latitude to increase VID and other voltage settings if necessary, I was able to achieve a host clock frequency of 170 MHz using a VID of 1.2 before the system became unstable, yielding a CPU speed of ~3.57 GHz (170 * 21 = 3.57 GHz). Further increases in VID, memory, or Uncore voltage did not allow for a stable system at higher clock speeds. Core temperatures rose to ~35°C at idle, and loading all four cores caused the core temperatures to rise to ~74°C. With the DRAM multiplier set at 10 instead of 12, the DRAM frequency fell to 1700 MHz. Here again I left the RAM timings at the default SPD values of 9 9 9 24. I did try to run with the DRAM multiplier set at 12, but there was just no way my 1600 MHz RAM was going to run at 2040 MHz!

So, in summary, I was able to achieve a ~28% overclock by shutting down Turbo Boost and raising VID to 1.2.

Comparison

Afterwards, I threw a few highly unscientific tests at both configurations to see how they compared. The first involved transcoding a typical MPEG-2 DVD .iso to the H.264 high-profile format using HandBrake. There was no significant difference in time between the two methods; however, both represented a nice improvement over the default settings. Turbo Boost did provide a nice bump in memory bandwidth, due mostly to the ability to run at a higher DRAM multiplier value. The use of Turbo Boost also won out when running 3DMark Vantage, suggesting that the higher multiplier values played a role. The game-based tests I ran were essentially useless since the particular games I had on hand to test with (BattleForge, Crysis, and X3 Terran Conflict) rely more strongly on the GPU than the CPU for performance.

Conclusion

Turbo Mode is something that should be evaluated based on your needs and the specifics of your overclock. Which one did I go with? I decided to run with Turbo Mode enabled and the lower host clock frequency. There were a couple of reasons for this choice. First, I rather like using the default voltage settings; by allowing Intel to manage the power settings, I’m able to run my system moderately faster, and in some cases a hell of a lot faster, but also a lot cooler. Second, I typically run applications that do not utilize all four cores, so a moderate overclock with Turbo Mode gives me better results than a higher-speed overclock without Turbo Mode. However, it’s good to know that as I grow to depend on more cores consistently, I can simply shut down Turbo Boost and clock the system higher.

iceflatline