MVC: Taking the Web By Storm

Author: Matthew Grigajtis 10-27-2011

MVC has become a very popular abbreviation lately, especially among web developers, and for good reason. MVC stands for Model-View-Controller, and it is effectively standardizing modern web applications across platforms and languages. MVC frameworks embrace the DRY (Don’t Repeat Yourself) philosophy and take an object-oriented approach that makes web application development elegant and powerful while keeping it easy for multiple developers to maintain.

The Model in the MVC architecture represents the application’s data. For example, if you were building an application that kept a record of employees in a database, the Employee would be the Model. The Model is also the layer that works directly with the database.
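As a rough, framework-agnostic sketch of the employee example, a Model might look like the following. The field names and the in-memory "database" here are hypothetical; a real framework (Django, Rails, CakePHP) would map the class to an actual database table for you.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    """The Model: the application's data and nothing else."""
    employee_id: int
    name: str
    title: str

# Stand-in for a database table: primary key -> record.
_employees = {}

def save(employee):
    """Persist an Employee (here, into the in-memory store)."""
    _employees[employee.employee_id] = employee

def find(employee_id):
    """Look up an Employee by primary key; None if absent."""
    return _employees.get(employee_id)
```

In a real framework, `save` and `find` would be inherited from a base model class and backed by SQL rather than a dictionary.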

The View portion of the MVC architecture is what the end user sees. Many open source developers would think of this as the template, while old-school Microsoft ASP.NET developers would think of it as the Master Page. All of the presentation markup and JavaScript is contained within the View, and it is not supposed to contain any business logic.

The Controller portion of the architecture is the glue that holds it all together. The Controller mediates between the Model and the View. It will often contain business logic, call Model functions that interact with the database, process submitted forms, and handle all of the redirects.
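To illustrate the Controller's role, here is a minimal sketch continuing the employee example. The names (`show_employee`, `render`) and the dictionary-based store are hypothetical, and the "View" is just a string template; in a real framework the routing, template engine, and redirect handling are all provided for you.

```python
from string import Template

# The View: presentation only, no business logic.
EMPLOYEE_TEMPLATE = Template("<h1>$name</h1><p>Title: $title</p>")

def render(template, **context):
    """Fill the View's template with data supplied by the Controller."""
    return template.substitute(**context)

def show_employee(employee_id, store):
    """Controller action: fetch from the Model, hand the data to the View."""
    employee = store.get(employee_id)
    if employee is None:
        return "404 Not Found"  # a real controller would redirect or raise
    return render(EMPLOYEE_TEMPLATE,
                  name=employee["name"],
                  title=employee["title"])
```

Note the separation: the template never touches the store, and the store knows nothing about HTML; only the Controller sees both.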

MVC frameworks are available for virtually all platforms and languages. Microsoft is up to version 3 of ASP.NET MVC, available to developers right now in Visual Studio with support for both C# and VB.NET. There are also a myriad of open source MVC frameworks available for free that run on *nix systems. Ruby on Rails has gained a great deal of popularity in recent years. Django is another mature MVC web framework, written in Python, that has a devoted following. CakePHP and Symfony are two of the more popular PHP MVC frameworks that many PHP developers prefer for modern application development.

While there are many legacy applications out there that probably have several years left in their lives, applications built on MVC frameworks will quickly become the norm, and MVC will be a standard skill that employers seek.


The Subjective Internet

Author: Steven Raines  10/19/2011

When we choose which web sites, blogs, and newspapers to read and which television stations to watch, we are knowingly filtering the information we receive. But what we may not realize is that the explosion of personalization on the internet is doing the same thing. How personalized are things, really? More than you may think. Recent peer-reviewed research shows that Google will return search results that vary from user to user by up to 64% (First Monday). Or consider the difference between the stories that appear in the “Most Recent” view of Facebook’s news feed and the ones it determines are “Top News” for you.

In a past interview with New Scientist magazine, Eli Pariser, board president of MoveOn.org, says that Facebook and Google now act in the same fashion as editors… highly personalized editors that reinforce what we already believe and limit our exposure to new ideas. His new book, The Filter Bubble, addresses the concerns around these issues and offers tips for busting your own filter bubble.

– Eli Pariser: Beware Online ‘Filter Bubbles’ TED Talk

Detecting Mobile Devices — Don’t Bother

Author: Adrian Roselli  10/11/2011

Since I started working on the web (and was slowly coaxed to the world of Netscape from Mosaic and HotJava), clients have asked me to find ways to adjust how a page behaves based on what browser the end user has. Before campaigns like the Web Standards Project (WaSP) took hold and slowly convinced web developers, and by extension clients, that the right approach is to build for standards first, web developers struggled with everything from clunky JavaScript user agent sniffers to server-side components like the browscap.ini file for IIS. These all took time to maintain and were never 100% effective.

I am thrilled we’ve gotten to the point on the web where progressive enhancement is in vogue, finally falling in line with our own practices of the last decade or so. With the advent of mobile devices and their smaller screens, we have support in the form of CSS media queries to adapt a single page to multiple devices, an approach now referred to as responsive web design. Yes, we are still working out the best practices and design details (such as not forgetting print styles), but the overall concept is solid. No longer must you code a text-only page, a mobile page, a printable page, and a regular page (or the templates for each if you are using a web content management system). You can build one page and let it handle all of those scenarios.
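As a minimal sketch of the technique described above, one stylesheet can adapt a single page to narrow screens and to print. The selector and the 480px breakpoint are illustrative only, not a recommendation:

```css
/* Default (desktop) layout. */
.sidebar { float: right; width: 30%; }

/* On narrow screens, let the sidebar stack below the content. */
@media screen and (max-width: 480px) {
  .sidebar { float: none; width: 100%; }
}

/* The print styles this article warns against forgetting. */
@media print {
  .sidebar { display: none; }
}
```

The point is that all three experiences live in one page; the browser, not the server, decides which rules apply.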

Except sometimes you find yourself in a situation where you have been asked to develop a different experience for a mobile user that lies outside the ideal of responsive sites. That different experience can be as simple as sending a user to a different page on the site if he or she is surfing on a mobile device. All those years of progress are swept away in one moment and we are back to struggling with user agents. I’d like to provide a little context on why such a simple-sounding request can be such an effort to implement.


If we fall back to user agent sniffing (reading the browser’s User Agent as it reports to the server), then we have an uphill battle. Just finding a comprehensive list is an effort. One site lists hundreds of user agent strings, and there is even a SourceForge project dedicated to staying on top of them all. When you consider how many different phones and browsers there are, and how often new ones come out (such as Amazon Silk), your clients need to understand that this approach is doomed to failure without ongoing updates (and fees).

If all you do is follow Google’s advice on its Webmaster Central Blog to simply look for the word “mobile” in the string, you’ll fail immediately — user agents on Android devices do not need to conform (and often don’t) to what Google says you will find. Opera doesn’t include “mobile” in its user agent (Opera/9.80 (Android 2.3.3; Linux; Opera Mobi/ADR-1109081720; U; en) Presto/2.8.149 Version/11.10), and the browser Dolphin doesn’t even include its name in the user agent string (Mozilla/5.0 (Linux; U; Android 2.3.3; en-us; PC36100 Build/GRI40) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1 ).
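The failure mode is easy to demonstrate. The sketch below (server-side, in Python for illustration; the same logic fails identically in JavaScript) runs the naive "look for mobile" check against the two user agent strings quoted above:

```python
def naive_is_mobile(user_agent):
    """The heuristic under discussion: is 'mobile' in the UA string?"""
    return "mobile" in user_agent.lower()

# The two user agent strings quoted in the article.
opera = ("Opera/9.80 (Android 2.3.3; Linux; Opera Mobi/ADR-1109081720; "
         "U; en) Presto/2.8.149 Version/11.10")
dolphin = ("Mozilla/5.0 (Linux; U; Android 2.3.3; en-us; PC36100 Build/GRI40) "
           "AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 "
           "Mobile Safari/533.1")

print(naive_is_mobile(opera))    # False: Opera's mobile browser slips through
print(naive_is_mobile(dolphin))  # True, but only because it mimics Mobile Safari
```

Opera says “Mobi,” not “mobile,” so a real mobile browser is misclassified; Dolphin passes only by accident of its Safari impersonation. Any substring heuristic inherits this fragility.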

You can take the inverse approach and instead detect for desktop browsers. It’s smart and simple as far as user agent sniffing goes, but it still falls prey to the same problem of the constantly changing landscape of browsers. Given that the next version of Windows is intended to switch its interface quickly back and forth between desktop and mobile (keyboard and touch), unless the user agent of every browser installed on the device changes as the user changes the device’s orientation, that technique is also doomed.

Serving different content based on screen resolution gets you around the user agent sniffing, but isn’t any more effective. With tablets approaching desktop screen resolution, and smartphone resolution approaching tablet resolution, there is no clear method for determining what kind of device a user has. An iPhone 4S held horizontally has 960 pixels of resolution and the Dell Streak tablet has 800 pixels (to clarify, the smaller device has more pixels, which is contrary to what most might expect). If you want a tablet to have a different experience than a phone, then serving it based on screen resolution won’t do it. As it is, the resolution of many tablets matches that of my netbook (1,024 x 600), which is definitely not the same type of device (it has a keyboard, for example).

What To Do?

Try to solve the objective earlier in the overall process — generate a different URL for mobile, embed it in different QR codes, look into feature detection, look at using CSS media queries to display or hide alternate content, and so on. Every case may require a different solution, but falling back to methods that were never reliable certainly isn’t the right default approach.

Making the Most of a Tradeshow – The Algonquin Way

I’m gearing up for a pretty exciting (and potentially scary) moment in my career; I’m representing Algonquin Studios at the Virginias Chapter of the Legal Marketing Association’s regional Continuing Marketing Education conference on Friday…and I’m going alone.

In my role here at Algonquin, I’ve attended other events but this is the first time I’ll do so without a wingman or, more accurately, without being someone else’s wingman. So, leading up to this conference, I’ve been thinking a lot about how to ensure I’m representing the company in the best possible way.

We work hard to maintain strong relationships with our clients, and everyone here at Algonquin is expected to apply four basic principles, The Four H’s (honor, honesty, humility, and humor), to everything we do. But it occurred to me that applying the Four H’s in a tradeshow or conference setting can be a different ballgame altogether. Let’s take a closer look:

Honoring Your Client – It can be tough to truly learn about a prospect on a tradeshow floor. People are hustling from one breakout session to another and there are a ton of distractions in the form of other booths, PA announcements, giveaways, even snacks. And there’s only so much time to devote to any one prospect; they’ve got other vendors they want to talk to and you’ve got other people you want to meet.

So how do you make sure you’re honoring the people you do speak with at a show? I think it’s about making sure you’re doing more listening than talking. Sure, you’re there to get your name out there but if you don’t know what your potential clients need and what their pain points really are, how can you be sure you’ll be able to help them in the long run? Getting to know your prospects, beyond a business card and an email address, is always a good idea.

Being Honest with Your Client – It can be easy to claim your company can do everything for everyone. But, let’s be honest: you’re not going to be the right choice for every show attendee’s needs. You’re there to gather new leads, and turning interested parties away might seem counterintuitive to your goals, but knowing when to say “Yes, we can absolutely help you with that!” and when to admit that a different vendor or solution might be a better fit can be vital to managing the expectations of your prospects.

Maybe a booth visitor has heard great things about your company from a colleague and has stopped by to learn more about you but she has a very specific project that’s just not in your bag of tricks. It might be easy to lead her down the primrose path, letting her think you have a product or service that will be a great fit for her needs so you can try to sell her on your actual offerings at a later time, but it’s not the right thing to do.

Honesty about your capabilities might mean you lose a potential project in the short term but it also helps protect your company’s reputation as a trustworthy organization, increasing the likelihood she’ll reach out to you in the future when a project that’s perfect for your company comes along!

Being Humble About Your Work – Remember the visitor who stopped by your booth in the example above? Imagine that this time, after she gets done telling you about all the fantastic things she’s heard about you from others, she presents you with a project you know you’ll be able to hit out of the park.

While it’s tempting to bask in the praise she’s offering, it’s more important to remember that there are probably plenty of companies that can do the work she requires. Heck, they might be able to do it even better than you can. Remembering that you’re replaceable, possibly by the guy standing three booths down from yours, is an easy way to avoid taking any potential client or project for granted.

Share Humor in all Situations – Yes, exhibiting at tradeshows can be costly and justifying the cost to executive management often makes attendance feel like super serious business. But, if you don’t let yourself have fun at the show I think you’re missing out on a great opportunity to connect with your potential clients.

I recently attended a conference with two other Algonquin team members, and within a few hours of being there, one of us got sick. Our three-person team was down a man for the majority of the show, and the two of us who stayed healthy had to run the ship (I should mention: it was the first show either of us had ever attended as exhibitors). At first we panicked but then, as we started telling our booth neighbors what was up, we realized… there was actually something kind of funny about the situation. Here we were, two conference newbies, manning the booth ourselves while our experienced leader was quarantined in his hotel room. So we told a few more people and got a few more laughs. And those people told other people, and eventually we had visitors stopping by just to check on us: making sure we were OK, offering advice, looking for updates on our coworker’s condition, and, yes, even asking for more information about what Algonquin Studios was all about.

Sharing the story of our coworker’s ill-timed illness helped us break the ice with other exhibitors and prospective clients. It made us human and it made us memorable.

These are the things I’m going to keep in mind later this week but I’ve got some questions for you…

How does your company or organization attack a tradeshow or conference? Do you go in with hard-and-fast goals, like a set number of new prospects to gather contact information from, or is your primary focus meeting people, learning about their pain points, and engaging the people for whom you can really make a difference? Either way, how do you make sure you achieve your goals?

Google Under the Magnifying Glass Again

Judiciary Committee

Author: Terri Swiatek 10/5/2011

There’s been a lot of scrutiny of corporate giant Google and its business practices as of late. The internet search and search-advertising company has been under fire from a number of competitors, spurring a series of Federal Trade Commission and European Union investigations over the past year. The most recent hearing, held two weeks ago, covered ‘The Power of Google: Serving Consumers or Threatening Competition?’ (View the full live webcast of the hearing above.)

Testimony before the Senate Judiciary Committee came from Google and its competitors, including: Eric Schmidt, Executive Chairman of Google Inc.; Jeffrey Katz, CEO of Nextag; Jeremy Stoppelman, Co-Founder and CEO of Yelp Inc.; Thomas Barnett, Partner at Covington & Burling LLP; and Susan A. Creighton, Partner at Wilson Sonsini Goodrich & Rosati, PC.

The basic argument against Google is that, as its business interests have diversified over time, its market dominance in the search and search-advertising industries presents a serious conflict of interest. Competitors claim it’s no longer in Google’s financial interest simply to present the most relevant results for a user’s search query, but rather to first present results that favor other Google properties and partners, where Google benefits from ad revenue. Google counters this charge, saying its goal is always to present the best answer to a search query and, if possible, to calculate and present that answer even if it means the consumer doesn’t need to click through to another site. For example, if someone searches “Macy’s,” Google’s studies indicate that, in most cases, the user is looking for a map with the location of a brick-and-mortar store, so the results page immediately displays a Google Places map. This is interesting because Google has changed, over time, from being a “GPS of the web” to a destination site itself.

The example below, recreated from one presented by Jeffrey Katz, illustrates how the relationships Google has with other businesses get preference and dominate the first half of a search results page. Paid ads are highlighted in green and Google Places and “related searches” are highlighted in red.

Washing Machine Query

I did my own, similar search query for “wedding dresses” and while three results did manage to surface to the top, the search results are still pretty Google-dominated.

Wedding Dresses Query

Google’s detractors also claim that the company has practiced improper scraping of content (Yelp’s accusations) and is using its expanding scale and volume to create unfair and anti-competitive barriers for its rivals (Microsoft’s complaints).

Interestingly enough, Jeffrey Katz (PDF) stated that 65% of Nextag’s search referrals come from Google, and Jeremy Stoppelman (PDF) stated that 75% of Yelp’s overall traffic comes from Google in some way. These highly successful companies are clearly benefiting from Google’s free organic listings as well as paid placement relationships, so why are they so highly critical of Google’s business practices?

I think it’s pretty obvious that these companies trusted Google to act in a specific manner and designed critical parts of their business around those practices and technologies. These companies placed an enormous amount of trust in a single customer acquisition channel they had no real control over and now, when Google has decided to change the rules, they find themselves at a severe disadvantage. But it should be obvious that an unbalanced customer acquisition strategy can be a hindrance to any company’s sustainable growth; you wouldn’t build a stock portfolio and invest 75% of your money in just one company, would you?

In fact, Google changes the game a lot and has been doing so for a while. According to Eric Schmidt (PDF), Google’s Executive Chairman, the company changes its ‘proprietary’ search algorithm slightly every 12 hours and did so over 500 times last year. When you’re playing on the home field and you happen to pay the referee’s salary, the question of a fair game is certainly debatable. But when have free markets ever been fair?

I do believe Google moved ahead of its competition because it was innovative and had the best results in the marketplace. Consumers chose Google over its competitors, so it rose above the others. But while Google is clearly in the business of ranking, it has aggressively expanded into many other competitive areas and, while there are certainly alternative search and search-advertising companies out there, the issue comes down to scale. While Bing is Google’s biggest competitor in the US, its 30% market share doesn’t even come close to Google’s 65% (comScore). In the mobile market Google has a 97% share, and in the EU Google takes the cake with 80% of regular search. At that scale it’s understandable that competitors and government entities would be concerned about reduced consumer choice, control of information, and stifled innovation. Google sits on the cusp of becoming a monopoly (Susan Creighton argues it’s not quite there yet (PDF)), which would bring the Sherman Act and other anti-trust laws into play.

Furthermore, Google’s apparent “bigness” obscures the fact that it lacks anything resembling monopoly power. Monopoly power has long been defined in the courts as the power to exclude competition or to control price . . . Google has neither power. – Susan Creighton

So, in the end, what can Google and the industry do to avoid intrusive and costly regulation of the internet search industry?

Over the past few years we’ve seen a trend of the government stepping in to fix broken industries like banking and healthcare; do we really want things to end up going down that path? The panels were clearly looking to Google, and to its competitors, for suggestions of changes that could be made to avoid government interference or additional legislation, and only a handful were offered. Could Google self-regulate, or should there be some type of collective committee? Or would that be unfair to Google, a company that’s worked so hard to become a success? Should the government instigate more in-depth, private investigations to determine whether Google is unfairly favoring the search results that make it the most money?

I don’t know about you, but I certainly don’t want my search engine to become a utility; paying for a free-to-consumers service that works so well as-is definitely isn’t an attractive option!

Amazon Silk, Yet Another Web Browser

Amazon Silk logo.

Amazon’s long-awaited tablet/e-reader was formally announced Wednesday, and the conversations about whether or not it will compete with the iPad are underway. I don’t much care about that. I am far more interested in the web browser it includes.

Amazon Silk is a new web browser, built on WebKit, and that is really the news of interest here. Add to that Amazon’s super-proxy approach to help users get content more quickly and efficiently, and you’ve got a new pile of potential chaos as a web developer. It’s far too early to tell how this will shake out, but in a client meeting Wednesday I already had to address it, so I think it warrants a little context on the current state of browsers so we can consider the potential impact on developers.

Amazon posted a video on its brand new blog to provide an overview of Silk (with an obligatory Geocities reference):

The 400+ comments raise some questions that tend toward a common theme — in the absence of a technical explanation, when can we get our hands on an emulator? Granted, there are plenty of comments about privacy, security, and some wild speculation, but the theme is clear.

As a web developer, I can tell you that we all feel overburdened with the assault of browsers already out there. We can champion the ideal of targeting the specs, not the browser, but when clients call to complain about a rendering difference (not even a problem) on another browser, it can get pretty draining. As Silk comes to market we’ll need to account for it, its hardware configurations, and its coming release versions (within reason, of course).

For some context about the burden we already have, yesterday Google Chrome developer Paul Irish wrote that, only taking into consideration Internet Explorer for desktop, we’re already on track to need to support 76 versions of just Internet Explorer (including version 8) through 2020. There are some broad assumptions in his article regarding how people will use the IE document modes, but the potential is still there. Add to that the new release schedule of many browsers (Firefox has gone from version 5 to 7 in ~90 days), and then pile on the browsers available for mobile devices, and we’re already at well beyond the number of variations of browsers that we had to support even in the heyday of the browser wars.

But Silk isn’t just a web browser — it’s got a super-charged proxy server that will compress images, compile JavaScript into its own machine-readable format, and batch files into a singular, smaller download. While this is nothing new (Opera Mini has done this for some time on mobile devices), Amazon’s implementation raises the hairs on the back of my neck when I think about all the years I’ve had to troubleshoot web applications because proxy servers are caching files, munging JavaScript, brutalizing images, and generally gutting the real-time features that the web had been moving toward more and more. I don’t know if this will happen with Amazon Silk, but given my experience with Opera, proxy servers, and users in general, I am filled with apprehension.


Opera responded to the Amazon Silk announcement with an explanation of how its own “cloud-compression” technology works:

Picture of web pages being processed by HAL 9000 and delivered to Borat.