Recently somebody asked me via Twitter what the make and model of my laptop is, the one I use for Exchange testing environments. Well, 140 characters is not a lot of space, so I decided to blog about it.
Our company has standardized on Dell laptops, but other vendors might have comparable configurations. The base model is a Precision M4700, but for lab purposes the configuration has been customized:
- CPU: Intel Core i7-3820QM @ 2.70GHz
- Memory: 32GB (4x8GB) 1600MHz DDR3
- HDD1: 256GB SSD Full Mini Card
- HDD2: 750GB 2.5” SATA 7200RPM
- Battery: Primary 9-cell 97W/HR
- Graphics: NVIDIA Quadro K1000M w/ 2GB GDDR3 (it can switch to the on-board graphics, which helps battery life)
- Wireless: EMEA Intel Centrino Advanced-N 6205 (802.11 a/b/g/n)
- Bluetooth: Dell Wireless 380 Bluetooth
- Optical: 8X DVD+/- RW Drive Slot load
- Display: 15.6” UltraSharp FullHD Wide View Anti-Glare LED-backlit
- Base option: Smartcard Reader
- Palmrest: FIPS Fingerprint Reader
- Camera: Integrated 1MP Camera with microphone
As the operating system, Windows 8 Enterprise has been installed. For virtualization, VMware Workstation 9 is used with a company license; unfortunately, Hyper-V cannot be used at the same time.
With this configuration, I can concurrently run 1x DC (1 vCPU and 2GB RAM), 3x Exchange 2013 (each 2 vCPU and 4GB RAM), 1x Lync 2013 (2 vCPU and 4GB), 1x Office Web Apps server (2 vCPU and 2GB) and some additional virtual machines (Windows 8, a Linux router/firewall and virtual load balancers). Although I must admit that I turn off one or two Exchange servers when testing with Lync and/or the Office Web Apps server.
To save space, I made several templates that serve as the basis for linked clones of the actual running servers. But even then, 256GB is not a lot of space, so some machines are moved to the significantly slower SATA drive. You do still get some speed benefit from having the template on the SSD.
It’s possible that Dell no longer offers this exact configuration, but it’ll give you a sense of what is (IMHO) necessary for a very decent lab laptop. For me this laptop was indispensable for testing proof-of-concept installations of Exchange 2013 environments. The keys are memory and SSD.
A while back, Microsoft enabled the long-awaited two-factor authentication feature for Microsoft Accounts and released a code generator for Windows Phone.
But a little-known fact is that this app can also be used for Google Account two-factor authentication. See the screenshots below on how to do this:
Go to the right corner of your Google page and select Account.
On the left you will see some options; select Security.
If you haven’t entered a mobile phone number yet, you’ll have to do so now. Be sure it can receive SMS messages.
After requesting a code and receiving it, enter it and verify.
Optionally, you can let Google trust the current computer you are working on. This is not necessary for our goal.
Confirm enabling 2-step verification.
Now 2-step verification is configured, but not yet enabled. In the middle you can see the option for Mobile Application, with the options Android, iPhone and BlackBerry. No Windows Phone. However, just choose Android.
A QR code appears. Start your Windows Phone Authenticator app and add an account (with the plus sign). It will request an account name and secret key, but you can scan the QR code by pressing the camera icon within the app (not the physical button on your phone). Enter the code that appears within the app for your Google account.
You’ll get a confirmation.
And now 2-step verification is active and works with the Windows Phone app. No need for Android, iPhone or Blackberry!
Be safe! Or in Dutch: “Hou je veilig!”
Update: The reverse is also true: you can use the Google Authenticator app for Microsoft Accounts. It's available for Android, iOS and BlackBerry. Thanks to fellow UC Architect and Exchange MVP Mahmoud Magdy for the confirmation.
And if you have multiple devices, let each of them scan the same QR code. That way they each show the same code. However, you could consider this a bit less safe (more devices to lose).
Not all organizations need every user to be mailbox-enabled; sometimes a mail user (also referred to as a mail-enabled user) with a forwarding SMTP address to an external mailbox is enough. However, requirements can change over time, and the mail-enabled user may need to become mailbox-enabled, making use of the calendar or perhaps even more efficient use of Lync integration.
However, converting a mail user isn’t just a matter of changing the RecipientType of the account. First the user needs to be mail-disabled; most importantly, it then loses all the configured SMTP addresses and the forwarding address. Then the user has to be mailbox-enabled, and all SMTP addresses that aren’t added via an Email Address Policy have to be added manually. Optionally, one can configure the mailbox to forward to the external SMTP address.
To make this process somewhat more manageable, I created a script that converts a mail user to a mailbox user. It keeps all configured SMTP addresses, provided they correspond with an accepted domain (otherwise they are discarded). The exception is the configured external SMTP address; keeping it to forward mail is optional.
The syntax is depicted below:
Convert-MailUser -Identity <UserIdParameter> [-KeepForwarding]
The mail user will be mail-disabled without a confirmation prompt. The -Identity parameter is mandatory and is a string. Accepted formats are:
- User Principal Name
- Display Name
- Distinguished Name (DN)
The switch [-KeepForwarding] is optional. This switch will retain the SMTP forwarding address from the mail user and will add it as a ForwardingSmtpAddress, with mail being forwarded to that address as well as delivered to the Exchange mailbox. No additional value (like $true/$false) is required.
Please note that the ForwardingSmtpAddress value does not show up in the Exchange Admin Center view at the moment (Exchange 2013 RTM CU1). You will have to use the Exchange Management Shell (Get-Mailbox | fl) to check whether the mailbox is forwarding mail to an external address.
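For those curious about the mechanics, the core of such a conversion boils down to something like the following. Consider it a simplified sketch rather than the actual script: error handling is omitted and the accepted domain check is only hinted at in a comment.

# Save the addresses before mail-disabling the user, as they are lost otherwise
$mailUser = Get-MailUser -Identity $Identity
$addresses = $mailUser.EmailAddresses
$external = $mailUser.ExternalEmailAddress

# Mail-disable the user, then mailbox-enable the same account
Disable-MailUser -Identity $Identity -Confirm:$false
Enable-Mailbox -Identity $Identity

# Re-add the saved SMTP addresses (the real script first checks them against the accepted domains)
Set-Mailbox -Identity $Identity -EmailAddresses $addresses

# Optionally keep forwarding to the external address, while still delivering to the mailbox
if ($KeepForwarding) {
    Set-Mailbox -Identity $Identity -ForwardingSmtpAddress $external -DeliverToMailboxAndForward $true
}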
You can download this script from the TechNet Gallery.
Note: This script has been tested on Exchange 2013 on Windows Server 2012, but will probably work on Exchange 2010 and 2007 and on Windows Server 2008 R2. Use at your own risk; the script is provided as-is.
Recently the backend of my Office 365 P1 account was upgraded to the Wave 15 series of products, which obviously includes Exchange 2013. But going through the settings of the Exchange Admin Center, I noticed something that made me curious.
When you go to the Exchange Admin Center, click Recipients > Mailboxes and select a user, you will see in the Mobile Devices section the option to disable the Exchange App, just below the option to disable Exchange ActiveSync. I’ve highlighted it in the screenshot on the left.
This is probably the same thing that was previously named the Outlook App in the Exchange 2013 Preview version. Check my previous blog post on exactly this topic.
Some interesting observations. First, the rename from Outlook App to Exchange App: is this to distinguish between this app and the Office 2013 Outlook that may or may not become available for Windows RT tablets?
Second, the fact that the option to disable the Exchange App is separate from Exchange ActiveSync is interesting. Does this mean the Exchange App does not use the ActiveSync protocol and uses, for instance, Exchange Web Services (EWS)? That could mean that mobile devices with the Exchange App can have a lot more features compared to ActiveSync only, which sadly hasn’t been enhanced in this most recent release of Exchange, as you can read in another blog post of mine. Or will it just be a special ActiveSync “device” which may overrule disabled ActiveSync?
I’ve checked whether these options were present in Exchange 2013 Cumulative Update 1 (CU1), but this isn’t the case. That suggests that these options will be available in CU2 at the earliest, and thus at the end of Q2 following the new servicing plan for Exchange. I would expect the Exchange App to be released around the time these options become generally available in Exchange with a CU (or Service Pack?). And hopefully for a lot of different mobile OSs.
But practically all of this is speculation; we will have to wait and see. An announcement or perhaps even a release during TechEd 2013 North America is somewhat logical, seeing the timeframe and it being a big event (plus some wishful thinking on my part, as I am attending this event).
Now that 2012 is in its final hours, I wanted to look back professionally but also personally. It's the human thing to do (fellow UC Architects Michel de Rooij and Michael Van Horenbeeck also made a retrospective).
What were the interesting technical events that happened in 2012?
Awesome events, especially MEC (fond memories!) and the start of The UC Architects podcast. These both helped me grow professionally. Interaction with peers I highly regard is something I value greatly. Both were work-related highlights of the year.
And what about the GA of Exchange 2013? The availability of new versions of Exchange, Lync etc. will have more impact on 2013 than on 2012. For instance, there is still no word on Exchange 2010 SP3 and the 2007 rollup update, both necessary for a co-existence scenario (aka transition).
At the start of this year I set some personal goals, one of them posting one blog post a week on average, including a multipart post. The other was preparing for Exchange Master.
I've reached my blogging goal, with room to spare! That's 53 blog posts (54 with this one), short and long. Almost more posts in one year than I've posted since I started blogging in 2008. It might seem as if in setting this goal I was going for quantity rather than quality, but that was not my intent. My intent was for me to learn how to blog regularly and keep it fun. I didn't want it to become a chore. Some posts worked better than others and I'm more proud of some than of others, but with every post I've tried to communicate something interesting. I hope I've succeeded.
Some of the most popular posts of 2012 were:
I've learned something new with almost every post. Working on a multipart post was a good learning experience, especially on the planning side. I'm not sure whether I want to keep up the same frequency next year; it takes quite a bit of time, and time is a valuable commodity. Especially regarding my other goal:
Preparing for Exchange Master. This was a tough one. I managed to get my certifications in order for the 2010 rotations, but psychologically I was still intimidated by it. Only by the end of the year did I start feeling ready for it. There won't be any 2010 rotations anymore, so now I've been preparing for the 2013 program. Goal not really reached, but as it is something you shouldn't take on lightly, I'm not that sad about it. However, this will be an important goal in 2013!
One other important announcement was that my company decided to migrate from Zarafa Groupware to Exchange Server 2013. I will be the architect on this migration and it will be an interesting experience. I hope to blog extensively about my experiences during this migration.
I want to thank everyone who read, commented, retweeted my tweets and blogs, helped me with technical issues and/or questions and provided me with ideas for blog posts. But a special thanks goes to my wife, who supported and supports me and for sacrificing time with me. <3
Here's to an awesome 2013!
Episode 14 is now available for download! It is hosted by Steve Goodman, with co-hosts John Cook, Serkan Varoglu, Johan Veldhuis and Stale Hansen.
The newest UC Architects episode is now available from iTunes and the Zune store and via www.theucarchitects.com. Previous episodes are also available from the same locations.
Next to Content Switching (which I recently wrote a post about), Citrix Netscalers can also do URL Rewrites. This enables us to simplify the OWA URL.
First, be sure the Rewrite feature is enabled by going into System, then Settings, and choosing Configure Basic Settings. Check the tick box for Rewrite.
After this, first make a Rewrite Action by going to Rewrite>Actions and adding an Action. Give it a descriptive name and set the type to REPLACE. In the Expression the following should be used:
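A typical target expression for replacing the full request URL would be something like this (an illustration on my part, as the exact value is in the screenshot and can vary per firmware version):
HTTP.REQ.URL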
In the String expression for replacement text, the following value should be used:
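Since we want to land users on Outlook Web App, the replacement string would be along these lines (again an illustration):
"/owa/"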
Be sure to type it in and not copy it from this blog, otherwise it might not work correctly. The screenshot below shows the value as mentioned before. Click Create to create the Rewrite Action and click Close to close the window.
Now you can create a Rewrite Policy by going to Rewrite>Policies and then clicking Add…
Again, give it a sensible name and be sure the Action is set to the earlier created Rewrite Action (in the screenshot below Rewrite_Action_OWA).
For the Expression, use the following:
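As the policy should only trigger on the root path, an expression like the following would do (illustrative, for the same reason as above):
HTTP.REQ.URL.EQ("/")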
Again, type it and do not copy and paste. Finally, press Create and Close. This Rewrite Policy now checks for URLs which use the root path / and will replace it with /owa/.
But in order to make it happen, the policy has to be bound somewhere. In this case I bind it to a Load Balancing Virtual Server created earlier (see this blog post). This has to be the Virtual Server which is responsible for (at least) Outlook Web App.
Open the Virtual Server, go to the Policies Tab and press the Rewrite (request) button. Right-click in the window and choose Insert Policy. Choose the previously made Rewrite Policy as shown below:
And voila! Now every user entering https://webmail.contoso.com/ will be directed to https://webmail.contoso.com/owa/ without a fuss! And because the policy triggers only on the root, directly using /owa, or /ecp for that matter, will also work.
How about HTTP to HTTPS redirection?
That is not done via rewrites, but there is more than one way. Make a Load Balancing Virtual Server listening on port 80, with the Virtual IP used for OWA as its IP address. Do NOT check any services. Instead, go to the Advanced tab and in the Redirect URL field enter HTTPS:// with the virtual IP used for webmail. Press Create and Close. Do remember to allow traffic over TCP port 80 towards the Netscaler, otherwise this won’t work. This is also described in the Netscaler deployment guide and depicted in the image below:
If you are also using Content Switching, you can instead make a Content Switching Virtual Server accepting traffic on port 80, again using the OWA Virtual IP. As a target, the Load Balancing Virtual Server using port 443 should be used (it can be used multiple times as a target; how it was made is described in this blog post). This is shown in the image below:
You will have to make duplicate Content Switching policies, as each policy can only be bound once. The Expression, however, is exactly the same as in the Content Switching policy used in the Content Switching Virtual Server using SSL.
Now every user will be directed to the correct URL, whether they use http://webmail.contoso.com, https://webmail.contoso.com/ or http://webmail.contoso.com/owa/ .
Next to F5, KEMP Technologies and a lot of other network load balancing vendors, there’s also Citrix with its Netscaler brand. Especially when an environment also has Citrix servers, it could mean that well-scaled Netscaler devices are present that can also be used for other purposes next to Citrix Secure Gateway access. Obviously Exchange 2010 comes to mind.
Citrix already has a very helpful Netscaler Exchange 2010 deployment guide (PDF warning). But unfortunately that guide is not always something one can implement exactly. For instance, in the guide Citrix uses a unique IP address for each separate protocol, which is not always possible if addresses are limited.
However, all or most Netscalers also provide Content Switching, with which you only have to use one IP address while still having optimized settings for persistence/affinity and time-outs for all protocols using the same TCP port (HTTPS). For some background information on persistence for Exchange 2010, check this article.
First create the services as described in the Citrix Deployment guide. You make one per physical server for each specific service, like HTTP (Load Balancing>Services>Add>):
When that is done you can create a Virtual Server for each different protocol, meaning OWA, ActiveSync, OAB, EWS etc. (Load Balancing>Virtual Servers>Add>). In this example, the OWA Service is shown with the specific Load Balancing method and persistence options (note that COOKIEINSERT requires SSL Offloading).
But instead of entering an IP address, keep it empty and untick the “Directly Addressable” box.
Now you have to make sure Content Switching is enabled on your Netscaler. You can do that via System>Settings>Configure Basic Settings>Enable Content Switching.
After this you can create Content Switching (CS) Policies via Content switching>Policies>Add…. For OWA I would check whether the specific hostname is requested in the HTTP request: HTTP.REQ.HOSTNAME.CONTAINS("webmail.contoso.com")
You can use the expression builder via the Configure… button and construct the expression from there.
When you’ve made the CS Policies, you can now make Content Switching servers via Content switching>Virtual Servers>Add…
Now you can add the IP address the Netscaler has to respond to. This is also the Virtual IP (VIP) address you have to point your FQDN for OWA and other protocols towards.
In the CSW field (open by default), right-click and choose “Insert Policy”. A drop-down menu appears (as shown above), and every available CS policy is visible. Note that a policy can only be used once.
In this case the previously made webmail.contoso.com policy is selected. Now select the target field and the different Load Balancing Virtual Servers are listed, in this case only VIP_Exchange_OWA.
Select it and choose Yes in the corresponding question box.
Now every HTTP request on IP 172.16.0.205 with FQDN webmail.contoso.com will be directed to use the Load Balancing Virtual Service which uses two Client Access Servers previously defined as valid services.
If you want to make other Load Balancing services for other protocols with other persistence timeout values, but with the same VIP, make another Content Switching policy and add it to the same Content Switching Virtual Server. However, you will have to point them to other Load Balancing targets, namely those with the optimal settings.
For Autodiscover use the expression:
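Analogous to the OWA policy, this checks for the Autodiscover hostname (an illustration; replace the FQDN with your own):
HTTP.REQ.HOSTNAME.CONTAINS("autodiscover.contoso.com")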
For ActiveSync use the expression:
HTTP.REQ.HOSTNAME.CONTAINS("webmail.contoso.com") && HTTP.REQ.URL.PATH.TO_LOWER.STARTSWITH("/microsoft-server-activesync")
For EWS, OAB and Outlook Anywhere you can adapt the ActiveSync expression with the URL paths /ews, /oab and /rpc, as shown below. If you don’t specify these explicitly, they will just use the OWA Content Switching policy (as it is agnostic about the path in this case) and thus the same persistence values as those specified for OWA. I found that this is sufficient most of the time.
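For example, a hypothetical EWS variant of the ActiveSync expression would look like this:
HTTP.REQ.HOSTNAME.CONTAINS("webmail.contoso.com") && HTTP.REQ.URL.PATH.TO_LOWER.STARTSWITH("/ews")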
Insert every CS policy in the CS Virtual Server and order them in the correct sequence. Note that the Netscaler checks policies with a lower priority value first and works up to higher values (first 10, then 100). The protocols which trigger on specific paths should come first, otherwise they would be caught by the generic policy and would not get the optimized load balancing rules.
In the above example you can see the generic webmail.contoso.com policy has an OWA target and a priority of 100. Subsequent policies are ActiveSync (EAS), Autodiscover and Offline Address Book (OAB) each with a corresponding target and persistence settings.
After implementation you can check whether the rules are (correctly) being used by watching the Hits column.
So with Netscaler Content Switching you are still able to optimize persistence settings per protocol while using one Virtual IP address for all HTTPS services.
For these screenshots I’ve used the Citrix Netscaler free trial virtual appliance, which can be downloaded from www.citrix.com. Note that for some of these settings you’ll also need SSL Offloading. The specific configuration and certificate selection (in the Content Switching Virtual Server, for instance) is not shown.
In parts one and two of this series, I discussed using PSTs, Exchange Personal (on-premises) and Online Archiving, as well as third-party solutions. In this last post I will discuss the use of Retention Policies and mailbox quotas in order to manage storage usage. As a bonus I will briefly discuss improvements in Exchange/Outlook 2013.
The basic Messaging Records Management functionality behind Retention Policies isn’t actually that new. In Exchange 2000 and 2003 you could configure Recipient Policies, and in Exchange 2007 you had Managed Folder Content Settings.
All of them regulate the retention of mail items (and since Exchange 2010 SP2 RU4 also calendar and task items) of the complete mailbox or of specific folders within a user's mailbox. You can delete items with recovery, delete them without recovery, or move them to the Archive Mailbox (if the user has one). For instance, a 90-day-old mail in Deleted Items or even Sent Items could have lost its worth while the cost of keeping it in the mailbox is too high, but the user could keep forgetting to clean up, or the mailbox is shared and hasn't got a main user who keeps it neat and clean manually. As it is processed server-side (on the Mailbox role), the effect is client-independent.
In Exchange 2010 you can give users the option of tagging specific (sub)folders and mail items, so that these objects get a different retention than the default setting. You can allow users to set No Archive/No Delete tags or to increase (or lower) the retention period of an item (via a Personal Tag). But the admin still has control over which tags are included in a Retention Policy, which in turn is assigned to a mailbox, and default folders aren't configurable by users. Note that usage of (non-default) Personal Tags in a policy requires an Exchange Enterprise CAL; in other cases only an Exchange Standard CAL is required.
Personally I use them for certain mailing lists, like all my LinkedIn notification mails. Their usefulness expires quickly (because it's just a notification), so I've changed the folder retention period from the default (never delete) to 30 days, after which the mails are deleted.
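In the Exchange Management Shell, such a setup would look something like this (a hedged sketch; the tag, policy and mailbox names are made up for the example):

# Personal Tag that deletes items 30 days after delivery (still recoverable via deleted item retention)
New-RetentionPolicyTag "Notifications-30days" -Type Personal -AgeLimitForRetention 30 -RetentionAction DeleteAndAllowRecovery

# Add the tag to a Retention Policy and assign that policy to a mailbox
New-RetentionPolicy "Example-Retention" -RetentionPolicyTagLinks "Notifications-30days"
Set-Mailbox -Identity user@contoso.com -RetentionPolicy "Example-Retention"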
A very helpful tool, which can benefit admins a lot because it lowers resource usage. But it also helps users by keeping some folders lean and mean, which reduces the risk of reaching quota limitations and helps in keeping only items that really matter.
Combined with the Personal Archive or Online Archive, an admin or an user can control when items are moved to the archive mailbox rather than just deleted.
Too bad the policies only work on retention age and not on other criteria like categories, and that they process the whole mail item and not, for instance, just the attachment.
- Admin has control and is able to give users (some) control
- Actions (such as deletions) are performed automatically on the Mailbox server, no client side rules thus also valid for other clients than Outlook
- Different policies with different Policy Tags can be implemented on a per mailbox basis
- No added license cost for default settings, it is included in the Standard Exchange CAL
- Can be combined with Archive Mailbox
- Only based on retention age; no specific rules for mail items with attachments, for instance
- Users need to be instructed about the admin settings in order to prevent accidental deletions
- Has a bit of a learning curve
- Mailboxes with customized retention policies with Personal Tags require an Exchange Enterprise CAL
And last but not least, mailbox quotas. These are settings on a database or mailbox level (the latter overrides the database setting) and entail a warning threshold, a prohibit-send threshold, and ultimately a prohibit-send-and-receive threshold. It is actually one of the things you use to correctly size your Mailbox server role.
But how does this help you? Well, even if you have sized your server by the book, it doesn’t mean your users will adhere to your expectations, and sometimes faulty clients or other causes can overflow a mailbox. In extreme cases it could use all available disk space and cause Exchange to dismount the database. Which leads to unhappy users.
Usually I tend to configure the quotas on the database level and have several databases (a maximum of five on Exchange 2010 Standard) with different quota levels. This makes it easy for administrators or even your service desk to quickly raise someone’s quota by simply moving the mailbox to another database (which isn’t much of a problem anymore with Exchange 2010, as the mailbox is only locked briefly at the end of the move).
With Exchange Standard you can have up to five databases, so you can have five different quota settings; four if you still need a Public Folder database. I tend to call this Mailbox Quota Tiering. It is a bit more tricky to project each DB’s maximum size, so capacity management in one form or another will be important. Furthermore, you’ll need management backing for the different quota settings and a clear process for moving users from one quota tier to another. A sketch of what such tiered settings could look like follows below.
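For illustration (a hedged example; database names and values are made up), tiered database quotas could be set like this:

# Tier 1: small mailboxes
Set-MailboxDatabase "DB-Tier1" -IssueWarningQuota 900MB -ProhibitSendQuota 950MB -ProhibitSendReceiveQuota 1GB

# Tier 2: larger mailboxes
Set-MailboxDatabase "DB-Tier2" -IssueWarningQuota 4.5GB -ProhibitSendQuota 4.8GB -ProhibitSendReceiveQuota 5GB

# A mailbox-specific override, for the DAG scenario mentioned below
Set-Mailbox -Identity user@contoso.com -UseDatabaseQuotaDefaults $false -IssueWarningQuota 9GB -ProhibitSendQuota 9.5GB -ProhibitSendReceiveQuota 10GB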
However, if you have a Database Availability Group and several Mailbox servers, this could result in a less than optimal distribution of databases. Therefore, in those cases I revert to specific quotas per mailbox when the database default (the same on all DBs) isn’t sufficient. Management is more cumbersome; using scripts is probably a good way to reduce this.
In my experience, having a Mailbox Quota Tiering system offers you and/or management a tool for controlling quotas and thus storage use. I’ve seen too many issues arise from suddenly imposed quotas and/or clean-up requests due to rapidly shrinking free storage space. Having several quota tiers also offers users an alternative to immediate cleaning, which is more service-oriented. This could be even more important than the technical benefits.
- When properly sized, quotas help prevent storage filling up due to an issue or normal growth
- When using DB-specific quotas, one only needs to remember to place the mailbox in the correct DB
- Having a clear quota policy in place helps prevent unpleasant surprises for users, management and admins
- When using mailbox-specific quotas, additional administrative effort is required
- It’s no guarantee storage won’t fill up; storage space monitoring is still required
- You’ll need backing from management for the specific quota settings and a process in place for moving users from one tier to another
Since I planned this series of posts, Outlook 2013 has been released. One feature that could be helpful is the Sync slider, or OST slider. As Exchange 2013 raised the supported mailbox size from 25GB in Exchange 2010 to 100GB, an issue can occur when the computer running Outlook 2013 in Cached mode does not have the space required to store this amount of data. Especially laptops and slates with SSDs favor speed over storage space. Note that this feature does not manage the amount of storage needed on the Exchange server, but I felt it was worth mentioning as it does affect local computers’ storage.
The OST slider (see image) is a way to limit the amount of data stored on the local drive by only downloading the last x months of mail. You can give users control over it or configure it via the Office 2013 Group Policies, as sketched below. When an item isn’t stored within the OST, Outlook needs a connection to the Exchange server. You could say it is comparable to the Personal Archive functionality; however, you do not need an Exchange Enterprise CAL for this and you can differentiate the OST slider settings per computer. You do need an Office 2013 license, obviously.
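If I remember correctly, the underlying Group Policy setting boils down to a registry value along these lines; treat this as an assumption and verify it against the Office 2013 ADMX templates:

Key:   HKCU\Software\Policies\Microsoft\Office\15.0\Outlook\Cached Mode
Value: SyncWindowSetting (DWORD) = number of months to keep offline (for instance 12; 0 = all)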
This feature can be another approach to limit the amount of data stored locally and thus can be a competitor of the Personal Archive. Especially when you have Office licenses with Software Assurance the costs are possibly less than when you have to purchase said CAL. If needed you could combine it, but the limitations of the Personal Archive are then still present.
- Works with Exchange 2007, 2010 and 2013
- User control or admin control on OST slider
- Data can be stored in a single mailbox, no need for Archive
- Needs Office 2013 (with Outlook)
- Only time based
- Only tackles the amount of storage needed on client computers, not on the Exchange server
Well, we discussed PSTs, Personal/Online Archive mailboxes, third-party archiving solutions, Retention Policies, mailbox quotas and the new Outlook 2013 OST slider. As you can see, there are several approaches to managing the amount of storage necessary for Exchange and Outlook. Except for perhaps PSTs (“Kill it with fire!”), there isn’t one complete answer.
As an admin and organization, you still have to decide which technology suits your needs and wants best. It could be just one solution or a combination of some or all of them. I hope I gave you some pointers that will make it easier to decide which is the best fit for you.
This concludes this series of blog posts on managing mailbox storage with Exchange 2010. Please note that some techniques are valid for other versions of Exchange, but my main focus was Exchange 2010.
Managing mailbox storage use in Exchange 2010, Part 1
Managing mailbox storage use in Exchange 2010, Part 2
Episode 13 has been available for download for a while now! It is hosted by Pat Richard, with co-hosts John Cook, Tom Arbuthnot and Justin Morris, and I rambled some lines in there myself.
Topics include news stories about the Surface Pro, the new improved Exchange/Lync Connectivity Analyzer website, the revoked Exchange 2010 SP2 RU5 update and a lot more.
As a special guest we had Rick Kingslan, Microsoft Senior Technical Writer for Lync. We asked him questions like: how are support calls handled, what does supportability mean, and what does it take to be a Technical Writer? A lot of useful and insightful answers!
The newest UC Architects episode is now available from iTunes and the Zune store and via www.theucarchitects.com. Previous episodes are also available from the same locations.
This blog post is something I have intended to write for a while now, because it is a question I get asked a lot: on which Exchange server roles do you need to install Exchange malware protection software, be it the now no longer for sale Forefront Protection for Exchange or similar products from McAfee, Symantec, GFI and the like?
Why is this, IMHO, a valid question? Well, if we ignore the Microsoft recommendation to install multi-role servers (meaning the CAS, Hub Transport and Mailbox server roles combined), you can benefit from not having to install the malware protection software on servers where it has little or no benefit. Note that I mean Exchange malware protection; I do not mean file-access server protection. Let's go over the specific Exchange 2007 and 2010 roles:
Client Access Server
On this server there is no mail flow and there are no databases present. In this case no malware protection is necessary or even useful. It only handles client protocols, and none of them are scanned by Exchange-aware solutions that I know of.
Hub Transport Server
This server handles all mail routing. Obviously, incoming external mail does need malware scanning, so when this server is directly connected to the internet and receives mail that has not been scanned previously, I would normally install a solution on this server role. Even mail from one mailbox to another in the same organization, or even in the same mailbox database, is transported through this role. So, if a user mails an infected attachment to a coworker, it should be quarantined or cleaned. Everything that is transported can or will be scanned by your malware protection on the Hub Transport server.
You could, however, choose not to use on-premises scanning if you use an Exchange Edge Transport server with malware protection, or if you use cloud malware protection such as Forefront Online Protection for Exchange (FOPE), recently renamed Exchange Online Protection (EOP). These vendors generally have very good detection rates, and it lowers your administrative load. Another option is to use on-premises appliances that are the first entry point for SMTP traffic before it enters your Exchange organization.
Mailbox Role Server
This is a tricky one. As said, all transported mail can be scanned via the Hub Transport server. If you have such protection, you could dispense with scanning this role, as most infectious malware is received via external mail. But there are cases where an infected mail could end up in a mailbox and thus in the mailbox database. Say a user sends an infected mail to a coworker or to the outside; the recipient does not receive it, as it is filtered by the Hub Transport server. However, the mail is already saved in the Sent Items folder of the sender's mailbox. With an infected attachment... The same goes for writing a mail with an infected attachment and saving it as a draft. Again, the Hub Transport server does not get to scan this message, and the message will reside in the mailbox database unless the user or admin deletes it manually.
The only automatic way to get rid of these malicious mails is to do a real-time or scheduled database scan, which costs server resources, especially with real-time scanning. I do not know of any confirmed cases where an Exchange server got infected by infected mail in the mailbox database (or Public Folder database, for that matter). Because of that, I feel it is safe to say that the computers at real risk are client computers (or devices). You could argue that the originating computer is probably already infected, because how else could it upload an infected file? If so, the risk of infecting that client computer is moot, as it already is infected. Other client computers (used by the same user with the same mailbox) should be protected by their own virus scanner (perhaps with additional protection via Network Access Protection, NAP), but if this is a risk you are not willing to take, you should implement an Exchange malware protection layer on the Mailbox server role. But consider that when you have protected the mail flow and all clients, this risk possibly doesn't outweigh the extra cost in resources (IOPS, memory, software licenses etc.).
If you need protection as close to 100% as possible, you should implement a Mailbox role solution. And having said that, consider that mail does not always originate from clients or via SMTP: a cross-forest or cross-platform migration could bypass SMTP. In that case, mail (probably) does not get filtered before it is put into your Exchange organization, and the only way to filter malware is to scan the databases. You could use a pre-staging Exchange server: a dedicated Exchange server with malware protection that scans all migrated mailboxes. It would clean mail before you move mailboxes to your production mailbox servers, which perhaps don't have Mailbox server protection. But that is added complexity.
Now note that I'm not advocating the absence of malware protection, but I did want to give an overview of the choices one perhaps has to make when (financial) resources are limited, or even just to clarify a bit about malware protection in Exchange 2007 & 2010. I hope it helps with your design choices.
To summarize: mail flow should always be protected, on-premises or via the cloud; installation on the Hub Transport server has the best chance of catching malware in most conceivable scenarios. Scanning mailbox database servers is probably less effective, but should be done when the highest form of security is required and the loss of resources is acceptable and incorporated in your design.
Exchange 2013 has a changed infrastructure with fewer roles, no VSAPI for malware protection suites to latch onto, and a built-in malware scanning module. This is so different and new that it will probably warrant a blog post of its own.
If you have a different opinion or flat out disagree with me, feel free to leave comments!
I have known this for several weeks, but wasn’t allowed to discuss it publicly yet. As some know, the company I work for, OGD ict-diensten, is using Zarafa as its groupware solution, and it recently decided to migrate to Microsoft Exchange 2013!
A lot of the Exchange experts I met during the Microsoft Exchange Conference, and some outside that event, asked me why we were using Zarafa and how that works. Especially when they hear we also deployed Lync 2010.
We have to go back about six years. At that time we only had a Linux solution (IMP), primarily for mail, as the calendar functionality was basically non-existent. So, we made a business case for a new product. At the time we wanted to give every employee a mailbox. For those who don’t know: at the time we had about 250-300 full-time employees and about 300-400 part-timers. Those part-timers were mostly college/university students who worked for us a day or two a week at a service desk or other less specialized IT work. It is kind of a unique business model, but it has worked very well for us over the years (and still does).
So, price was an issue (calculating with about 1000 users) and we got a very good deal at the time. But there were also technical arguments. At the time the short list was Zarafa 5 (with 6.x on the horizon) and Exchange 2007. Despite the fact that we were Microsoft Partners (alongside a lot of other company partnerships), we knew we had a lot of *nix users. Furthermore, the bulk of users wouldn’t have Outlook on their work PCs. Therefore, webmail was a massively important way of accessing the groupware solution. And I am sorry to say, at the time Zarafa was ahead of Exchange 2007 in multiple-browser support and some much-needed web features. Thus, Zarafa won the business case.
However, times have changed. The company has changed and our requirements have changed. I already said we implemented Lync 2010; we found out that a lot of students didn’t use their mailbox (forwarding mail to a personal account), and in my personal opinion Zarafa couldn’t keep up with Exchange in several areas, most of all feature-wise (Outlook 2010 and 2013 compatibility) and interoperability-wise (with Lync for instance: no EWS…). This time Exchange won the business case.
And now Exchange!
As we didn’t have any legacy Exchange deployment, we were free to choose any version of Exchange, including 2013. As we have also adopted an eat-your-own-dog-food principle (use what you sell) and we wanted to be at the forefront of Microsoft technology, we chose to deploy Exchange 2013.
But Exchange 2013 is not the only Wave 15 product we are going to implement; as said, we also have Lync 2010, and we do have a SharePoint 2010 implementation (for some departments). All of those will be upgraded to the Wave 15 releases, and eventually we are going to use all of the functionality of Exchange and Lync 2013. This includes the (at least in the Netherlands) underused Unified Messaging functionality in Exchange.
I will be the solution architect responsible for the Exchange implementation and migration (working in a team with Lync and SharePoint specialists), and I expect this to be one of the most challenging assignments I've ever had. Mostly because the user group consists of trained and highly critical IT pros. Most of them have been critical of our Zarafa implementation and its (on occasion lacking) functionality. Just as I have been very critical of Zarafa (publicly and privately), I expect the same level of scrutiny and thus expectation of quality. And rightly so, IMHO.
This will be one of the first Exchange 2013 production deployments I will perform and I will try to share my experiences in the coming weeks/months on this blog and via Twitter. So stay tuned!
Recently Microsoft released an updated version of the Microsoft Exchange Remote Connectivity Analyzer, rebranding it as the Microsoft Remote Connectivity Analyzer, as it can now also test Lync connections in addition to Exchange and Office 365.
Another addition is the Microsoft Connectivity Analyzer Tool, a local variant of the online tool for Exchange connections. This is a very helpful tool for internal testing, and despite being in beta, it already successfully helped me identify a certificate issue before I put it into production.
As it is a local tool, it uses your local DNS and network routing. As said, this is helpful for local troubleshooting, as internal connectivity sometimes differs from the external connection (via public Wi-Fi networks, for instance). But another effect of this is that you can use it in lab and testing environments before making changes final. In my case we had not yet changed the external DNS A records to point to the new Exchange 2010 datacenter, while everything else was already set up for publishing. By changing my local hosts file I could test the new datacenter with the correct domain name as if it were in production.
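For reference, such hosts file entries (C:\Windows\System32\drivers\etc\hosts) look like this; the IP address and names are made up for the example:

172.16.1.10    webmail.contoso.com
172.16.1.10    autodiscover.contoso.com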
I had already detected an issue with the Outlook AutoDiscover process, and although the Auto Configuration tool in Outlook is helpful (CTRL+right-click the Outlook tray icon and select “Test E-mail Auto Configuration”), it does not give detailed error information the way the Remote Connectivity Tool does. Below is an example of this test, with a successful result.
So, how do you use the tool? Go to the connectivity site and select the Client (beta) tab, depicted in the first image of this post.
It will verify application requirements; if the tool isn’t installed yet, a Security Warning appears:
You can click Install if you want to install the tool. It will then check for prerequisites; you will have to install the .NET Framework 4.5 prerequisite.
As I think this installation broke some .NET applications (MetroTwit), I would advise installing this on a testing computer and not on an Exchange server! That also makes it easier to change hosts files and other things.
After installing .NET Framework 4.5 and rebooting your computer, you can click the link on the Connectivity site again. If you are using Chrome or Firefox, you may need to install additional extensions. For Chrome (my main browser) you need to install “ClickOnce for Chrome”. Firefox needs the Microsoft .NET Framework Assistant for Firefox.
After everything is set, the tool starts via the links on the Connectivity site Client tab, and ultimately the following screen pops up:
For my issue, I used the “I can’t log on with Office Outlook” option and entered the account information as I would have on the site. Luckily, no captcha.
After all the tests are run, you get the result screen. You can save it as an HTML file and/or review the results of the tests.
The HTML file is an awesome addition, as you can now easily send the results to a coworker for further examination. Below is an excerpt from such an HTML file, with the same formatting as the website and the tool. Note the red square: as you can see, it contains a non-public IP address, proving that this tool could be invaluable for local testing of Exchange connectivity.
Thanks to this tool I discovered that the AutoDiscover process failed at a later point than I first anticipated, and that the certificate is loaded but fails to validate for the AutoDiscover domain name. As it turned out, there is probably an issue with this specific certificate. Something I previously probably would only have found out after putting this certificate into production, with a broken AutoDiscover process as a consequence.
Even though the tool is in beta, it already helped me prevent unnecessary service disruption!
In the previous part (yes, I know. It’s from May… I’m very ashamed) I discussed using PSTs or the built-in Personal Archive (also referred to as Online Archive).
In this post we will discuss third-party archiving solutions and Exchange Online Archiving, which in this case is a service that is part of Office 365, although you can mix it with an on-premises solution (see the possible confusion).
Third Party Archiving
As it seems, organizations have long had issues with overflowing Exchange stores and storage. In that niche, numerous third parties, like Metalogix Archive Manager, Symantec Enterprise Vault and GFI MailArchiver, filled the need. However, there are two ways of looking at archiving: storage-focused or compliance-focused (or somewhere in between).
Third-party archiving vendors advertise with storage management. That is indeed something they can help with, and it was at first their main focus; understandable, as storage was expensive but email was not regarded with the same importance as it is now. In my view, their current strongest suit has shifted to compliance regulations. Especially since SOX and the like, these solutions are sometimes indispensable.
Some customers automatically propose a third-party archiving solution, because they have a lot of data, etc. But I become skeptical the moment this happens. Exchange 2010 isn't 2003 anymore, and reasons to implement such a solution in the past may no longer be valid. To summarize: archiving solutions may have started with storage maintenance, but due to changing external factors and architectural changes in Exchange, their focus has shifted towards compliance. An important thing to realize, as it could affect your choice of storage management.
Most solutions require one dedicated server for the server application and storage of archived items. For metadata, a SQL server is usually required. This could mean that instead of data reduction, one is just shifting data from one place to another. That could be enough due to storage tiering etc., but the extra overhead cannot be discounted. Yes, if you have a DAG with multiple copies, archiving would reduce that amount of storage. But you lose the same amount of redundancy with it, unless you arrange redundancy for the archive as well, and thus still have to account for at least twice the amount of storage for the archive. Even if you only need compliance, you will have to address these issues (some compliance regulations specify a need to keep all mails for at least several years).
Furthermore, you have to take extra care backing up the archiving servers and keeping them in sync. Nowadays this seems to be handled a lot better, but these solutions always add extra complexity to your environment. You have to decide whether this is worth it.
And last: these solutions are not free and there are always additional licensing costs, most of the time per mailbox and/or per specific feature. Also, don’t forget the Windows Server licenses and, in some cases, Microsoft SQL Server licenses.
- Flexible rules regarding archiving and compliance (time- or size-based, only attachments, etc.)
- Most of the time compliance features are also built-in
- Data is stored outside the Exchange environment, sometimes with Single Instance Storage reducing storage space even further
- Still need storage for archived mail, although less when there is no need for redundancy
- Most of the time additional servers are needed (for instance: file storage, a metadata database, the service itself) and thus higher complexity
- Additional administrative effort for maintaining archiving solution
- Additional licensing fees (most solutions work per mailbox)
Third-party archiving solutions can be helpful and are more flexible with archiving rules than Exchange's own rules, which are only based on the age of the item; but consider the investment you have to make, especially if you don’t need it for compliance reasons.
Exchange Online Archiving
Exchange Online Archiving is basically the same as the local Personal Archive (also referred to as Online Archive) functionality mentioned in part 1 of this series. The difference is that the Archive Mailbox is now located in Exchange Online. For this to work you do have to set up DirSync and Federation Services, just as with a hybrid Exchange environment. In fact, it is a hybrid environment, but you don’t have to put the main mailboxes in Exchange Online.
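Enabling a cloud-based archive for an on-premises mailbox is then done from the on-premises Exchange Management Shell, roughly like this (a hedged sketch; the routing domain differs per tenant):

Enable-Mailbox -Identity user@contoso.com -RemoteArchive -ArchiveDomain "contoso.mail.onmicrosoft.com"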
The upside is that you pretty much don't have to worry about archive storage, because you don't host it. Availability, capacity etc. are all taken care of by Office 365/Microsoft. Users don't even have to notice that their archive is not on-premises. You do have to pay a fee per mailbox, but the costs are predictable, and it could very well be cheaper per GB than having it on your own storage solution.
You do need to account for extra administrative effort due to the Federation and DirSync services, although probably less than for an Exchange server. If they are down, nobody can access their Archive Mailbox as they normally would, so monitoring and maintenance are required (although perhaps to a lower degree, if Archive Mailboxes are deemed less important). Another downside is when the internet connection fails: users can access their on-premises mailbox, but not their Online Archive Mailbox.
Depending on where your company is based or what kind of industry it operates in, it could very well be that certain privacy or security guidelines, or even laws, limit or prevent you from using this solution. Especially in Europe, (semi-)governments aren’t always allowed to store information off-premises or on servers that could potentially be outside the country (not even in a fellow EU country). But even if you don’t have these legal prohibitions, there are still a lot of companies that don’t like to store their data (albeit older mails) off-premises with another company. Keep that in mind.
- No local storage of archived data
- Can be cheaper per GB than on-premises storage
- No maintenance of archived data
- Need for additional (Federation Services and DirSync) servers
- Need for internet connection when access to Archive Mailbox is needed
- Additional administrative effort for maintaining the hybrid configuration
- Additional fees per mailbox per month (however they are predictable)
- Data is stored off premises which could have legal ramifications
- The lack of control over your data could be a psychological barrier for some
Exchange Online Archiving can be an easy solution depending on your situation; however, legal and psychological barriers will often prevent you from using this option.
This concludes part 2 of this series. The next and final part will discuss retention policies, mailbox quotas and a little bit about Outlook 2013.
Managing mailbox storage use in Exchange 2010, Part 1
Managing mailbox storage use in Exchange 2010, Part 3
A few hours back the Microsoft Exchange team published a blog post on how to connect and support your Windows 8 Mail app with Exchange. It has a lot of good info on which Exchange ActiveSync (EAS) policy settings are supported and how the App reacts.
Most of those things I had already discovered in earlier blog posts of mine, using the Windows 8 Consumer Preview. It seems as if there have been no massive changes since then. But I did dig somewhat deeper into some specific features. So, check out my two blog posts as well:
Yes, there is ActiveSync in Windows 8!
More about Windows 8 CP and ActiveSync