Is XML too ‘verbose’?

As one of the most important and comprehensive languages for encoding text, XML has enjoyed great popularity as the basis for structured writing techniques and technologies. Yet even with continuous refinement and the broad adoption of data models like DITA for authoring and publishing, XML poses real challenges, especially when compared with so-called plain-text formatting languages such as Markdown. Many commentators, among them The Content Wrangler’s Mark Baker, have criticized XML for its perceived limitations.

“XML’s complexity makes it hard to author native content.”

A ‘verbose’ language
In a post bluntly titled “Why XML Sucks,” Baker acknowledges that XML performs a vital function as the basis for structured writing systems, but argues that its tagging – which he says makes XML “verbose” – inhibits author productivity.

“If you write in raw XML you are constantly having to type opening and closing tags, and even if your [XML] editor [application] helps you, you still have to think about tags all the time, even when just typing ordinary text structures like paragraphs and lists,” said Baker.

“And when you read, all of those tags get in the way of easily scanning or reading the text. Of course, the people you are writing for are not expected to read the raw XML, but as a writer, you have to read what you wrote.”

The absence of absence
Baker hangs a lantern on the issue of whitespace. He cites the original purpose of XML (“XML was designed as a data transport layer for the Web. It was supposed to replace HTML and perform the function now performed by JSON. It was for machines to talk to machines…”) as the reason why whitespace has no meaning in XML.

And what’s the big deal about whitespace? Says Baker, “…in actual writing, whitespace is the basic building block of structure. Hitting return to create a new paragraph is an ingrained behavior in all writers….”

He goes on.  “This failure [of XML] to use whitespace to mean what whitespace means in ordinary documents is a major contributor to the verbosity of XML markup.  It is why we need so many elements for ordinary text structures and why we need end tags for everything.”
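
To make the comparison concrete, here is a short, hypothetical note rendered in generic XML markup; the element names are illustrative rather than drawn from any particular schema. In Markdown, the same content would be a single line of text followed by two lines beginning with a hyphen – the whitespace itself carries the structure that XML must spell out with start and end tags.

```xml
<section>
  <p>Before you install the update, confirm the following:</p>
  <ul>
    <li>The server is running.</li>
    <li>A current backup exists.</li>
  </ul>
</section>
<!-- The Markdown equivalent is just:
     Before you install the update, confirm the following:
     - The server is running.
     - A current backup exists.
-->
```

Every piece of structure the Markdown version implies with a line break, the XML version must declare twice – once to open and once to close – which is precisely the verbosity Baker objects to.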

“XML performs a vital function…”

No ambiguity
While all this talk of verbosity and whitespace may seem fairly damning to the future of XML, the truth is that XML serves a fundamental purpose – the “vital function,” as Baker puts it – that keeps it in wide use and sustains its longevity. As Charles Gordon of NetSilicon said in 2001, XML is “…a tool that concisely and unambiguously defines the format of data records.”

The “unambiguous” aspect is particularly important. While Baker may lament the loss of readability when viewing XML-encoded content in its raw form, the fact that XML requires authors to make conscious decisions about the structure of what they’re writing – even the placement of whitespace – makes every line purposeful. XML is ideal for communicating with unambiguous intent, which is the precise purpose of structured writing systems and rule-based content architecture. Raw XML is indeed verbose, but its general simplicity has made it a building block of so many improvements in technical communication that its use endures and even flourishes to this day.

3 possible pitfalls in a content management system

Building a streamlined curation process allows owners, authors, editors and even users to fully engage with content in a manner best suited to each person’s needs. However, even the best-laid plans will go awry if oversight responsibilities are murky or absent. The problem simply gets worse as the volume of data-rich hypertext content increases. Curators must maintain thorough reviewing processes to verify that the underlying data is of value. Here are just a few of the flaws that inhibit effective content curation.

“Underpinning each pitfall is a missing aspect of oversight.”

Unclear ownership
As a foundation for establishing acceptability, authority, and proper editing privileges, a robust curation strategy requires a system that maintains ownership of content on a granular level. Otherwise, the system gives rise to “orphaned” content; i.e., content exposed to an editorial gap (because it has no owner) that can result in inaccuracies.
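
As a hedged illustration of what granular ownership can look like: DITA’s <prolog> already provides an <author> element and a generic <othermeta> mechanism, so an owner could be recorded on each topic roughly as sketched below. The “owner” and “review-by” names are assumptions for illustration, not part of the DITA standard.

```xml
<topic id="install-overview">
  <title>Installation Overview</title>
  <prolog>
    <author>j.rivera</author>
    <metadata>
      <!-- "owner" and "review-by" are illustrative metadata names, not standard DITA -->
      <othermeta name="owner" content="docs-team-infrastructure"/>
      <othermeta name="review-by" content="2018-06-30"/>
    </metadata>
  </prolog>
  <body>
    <p>…</p>
  </body>
</topic>
```

A topic whose owner field is empty, or whose review-by date has lapsed, can then be surfaced by a simple query – one way to keep “orphaned” content from slipping through the editorial gap.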

Lack of coherent review stage
While it may seem obvious that a review stage is needed for effective content curation, a significant issue is where review should occur. Should individual data components be subject to review and approval? How much should generated content be subject to peer review if those reviewing it have the same editing privileges as the author? The placement of review stages in your curation processes is the essence of content “management” at every phase of the content lifecycle.

Doesn’t consider design and formatting
Content curation programs put extensive thought into information architecture but pay comparatively little attention to the end-user’s experience with the content. This can lead to the selection of a component content management system (CCMS) that does everything expected of it while producing content that is fundamentally not user-friendly. Unless the CCMS integrates end-user presentation into its operating capability, even the most complex CCMS can miss the mark.

Putting together a content lifecycle strategy

For marketers, creating compelling content that connects with the intended audience is the main push of their daily work. But once this content is created, what happens next? How will it be disseminated, redeveloped and warehoused for future use?

“Marketers who have developed a strong content lifecycle have a leg up.”

Content: No longer disposable
Marketers who have developed a strong content lifecycle have a leg up when it comes to managing their materials and potentially reusing them for later campaigns. Columnist Robert Norris recommends the development of lifecycles to help craft content that resonates with different groups of customers and can remain effective across a variety of channels. To do this, he advocates moving away from treating content as a disposable material and toward viewing content as a living, evolving entity worthy of attention and careful consideration.

“Critically, we realize that these audiences have very specific needs for which we have the expertise—if not yet the processes —to craft and maintain targeted knowledge base resources,” Norris writes in The Content Wrangler. “Moreover, we recognize that the task of creating and publishing these resources must receive the same diligent attention to detail that we apply to our goods and services because poor publishing reflects upon our credibility just as harmfully as does a poor product or service.”

The content lifecycle
To ensure that content reaches its full potential, Norris proposes a lifecycle based on constant evaluation and redevelopment. The steps he puts forth include:

  • Production – Where content is developed, based on existing data components.
  • Approval – Content is reviewed and vetted by editors and administrators before being slated for release.
  • Publish – Content is configured and fully optimized for a publishing platform, as well as made discoverable by adding meta-data and setting prominence.
  • Curate – Ancillary resources are integrated into the content.
  • Improve – Feedback, telemetry and analytics are used to identify both the successful aspects of the content and its deficiencies. Once identified, the content is tweaked to address these pain points.
  • Re-certify – An often missed step, data used in content must be reverified periodically to ensure it is still relevant and accurate based on more recent findings.
  • Update – Aside from recertification, consideration of timeliness and cultural relevancy can warrant changes from minor updates to major revisions.
  • Retire – Once a piece of content has reached the end of its relevancy, archiving it is warranted. Make sure the content and its metadata are tagged for ease in locating it later.

A hypertext-based content paradigm like DITA makes this lifecycle even simpler by allowing content to be evaluated and repurposed at the XML component level. Analytics can show the efficacy of a single data element, and automation driven by content tagging can streamline the delivery of campaign variations to audience segments to gauge impact. From there, each stage of the lifecycle is a chance to refresh metadata and reassemble components into more compelling content.
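
As a hedged sketch of what component-level lifecycle tracking can look like, DITA’s <prolog> offers <critdates> for creation and revision dates and <othermeta> for anything else. The “lifecycle-stage”, “recertify-by” and “audience-segment” names below are illustrative assumptions, not standard DITA fields.

```xml
<prolog>
  <critdates>
    <created date="2016-03-01"/>
    <revised modified="2017-01-15"/>
  </critdates>
  <metadata>
    <!-- illustrative, non-standard metadata names -->
    <othermeta name="lifecycle-stage" content="published"/>
    <othermeta name="recertify-by" content="2017-07-15"/>
    <othermeta name="audience-segment" content="administrators"/>
  </metadata>
</prolog>
```

With stage and re-certification dates carried on each component, analytics and automation can act on individual XML elements – for example, pulling everything past its recertify-by date into the Re-certify step – rather than on whole documents.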

Defining and implementing ‘Transcreation’

Creating effective globalized content is much more than simply translating text. The context in which text exists forces, in many ways, the creation of new content that has meaning only within that context; the greater the number of contexts, the greater the number of translation-induced content changes.

Translation-induced content creation, or "transcreation," forms a crucial part of your localized content strategy. Let's examine transcreation in greater detail and see how it factors into your fully realized content strategy.

"What exactly is transcreation?"

Defining the cultural lines 
Transcreation improves on word-for-word translation through a top-down focus that gives greater weight to content meaning and navigation, and that harnesses cultural norms to convey ideas in ways a literal rendering cannot. As such, the mechanism at its core is conceptual rather than literal.

Why this emphasis on concepts versus specific materials or language? Because some linguistic structures for conveying ideas do not function across regional lines. Cultural interpretations can vary for even the most essential building blocks of language, as pointed out in a study published by the American Psychological Association. Examining the way that different regional and cultural groups interpret facial expressions, lead researcher Rachael E. Jack commented that "East Asians and Western Caucasians differ in terms of the features they think constitute an angry face or a happy face."

"Our findings highlight the importance of understanding cultural differences in communication, which is particularly relevant in our increasingly connected world," Jack told the APA. "We hope that our work will facilitate clearer channels of communication between diverse cultures and help promote the understanding of cultural differences within society."

Breaking it down to build back up
This underscores the importance of transcreation.  In our quest to convey content's "true intent" and not be stymied by cultural differences, we must break it down to its component parts and reassemble it locally so as to create the most compelling and clear messaging. For example, look at Coca-Cola's most recent company slogan, "Taste the Feeling". As a global brand, that slogan will be translated into any number of languages.

"Transcreation takes content as written and breaks it into component parts."

Now consider the problem of establishing that slogan in a locale that does not emphasize "feelings" or that considers it shameful to express an excess of emotion. In this context, a direct translation, which would render the equivalent of the words "Taste" and "Feeling", would not reach its audience with the conceptual meaning that Coke triggers a visceral, joyful response.

Transcreation takes a different approach.  Creators and content managers first break the content into its component parts. Next, they tag the content with signifiers, allowing the data to be parsed into components that are pertinent to the target culture. The tagged content is fed to localized content management teams in the form of a creative brief. These teams, complete with their own culturally tagged data, reconfigure the basic content building blocks into new – yet derivative – content. 
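
A hedged sketch of what "tagging content with signifiers" might look like in practice: the slogan is broken into conceptual components, and each carries illustrative attributes (concept, intent, locale notes) that a localized team could receive as part of a creative brief. None of these element or attribute names come from a published standard; they are assumptions for illustration.

```xml
<creative-brief campaign="global-slogan">
  <!-- illustrative structure; element names are not from any published schema -->
  <component id="slogan-verb" concept="sensory-experience">
    <source-text xml:lang="en">Taste</source-text>
    <intent>Invite a direct, physical experience of the product.</intent>
  </component>
  <component id="slogan-object" concept="emotional-response">
    <source-text xml:lang="en">the Feeling</source-text>
    <intent>Evoke a visceral, joyful response.</intent>
    <locale-note region="example">Downplay overt emotion where its public display is
      discouraged; convey shared enjoyment instead.</locale-note>
  </component>
</creative-brief>
```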

Can automation break down cultural barriers?
While transcreation is already being used to great effect in companies worldwide, it is largely a manual process.  Nevertheless, automated transcreation is on the horizon.  Smart websites already use localization parameters to reconfigure formatting elements, swap images and insert culturally specific elements. Research teams are tackling the problem of cultural computing with sophisticated algorithms that may one day emulate human thought patterns, allowing automated transcreation to be a seamless and instantaneous process.

How DAM impacts content management

Content management is all about the on-demand assembly and reconfiguration of information modules into new products – either autonomously or under human supervision. In a world that accepts the notion of "fair use" of copyrighted material, it should be relatively easy to repurpose information modules, the only limits being those of imagination (machine-driven or otherwise) or of technical capability. It is ironic, then, that the rise of regulations and commerce tied to authorship should have a complicating impact on CMS development.

"DAM handles one granular aspect of content – authorship."

This is where Digital Asset Management (DAM) influences the world of content management tools. DAM handles one granular aspect of content – authorship – and concerns itself primarily with enforcing copyright protection. A DAM system functions by tracking the use of copyrighted material and flagging improper, unauthorized or unattributed use. A digital media asset is entered into the DAMS in the form of a high-resolution "essence" along with detailed metadata about the asset.

From there the DAMS can be used to pull logged materials as needed and identify uses of the asset, flagging a violation of copyright – or, as a secondary function, ensuring that the copyright owner is compensated for the authorized use of the asset. This can be a crucial revenue stream for authors and copyright owners, though it may also become complicated once an asset has been combined as a module into other pieces of information.
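
A hedged sketch of the kind of record a DAMS might keep for one asset: a pointer to the high-resolution "essence," descriptive metadata, rights information and a usage log that supports flagging. The element names are illustrative assumptions, not the schema of any particular DAM product.

```xml
<asset id="img-004213">
  <!-- illustrative record; not the schema of any particular DAM product -->
  <essence href="masters/img-004213.tif" resolution="600dpi"/>
  <metadata>
    <title>Factory floor, assembly line B</title>
    <creator>A. Okafor</creator>
    <rights holder="Example Media Co." license="editorial-use" expires="2019-12-31"/>
  </metadata>
  <usage-log>
    <use document="install-guide-v4" date="2017-02-10" authorized="yes"/>
    <use document="partner-newsletter-03" date="2017-03-22" authorized="no"/> <!-- flagged -->
  </usage-log>
</asset>
```

A record along these lines is what lets the DAMS both identify unauthorized uses (the second log entry would be flagged) and support compensation for authorized ones, even after the asset has been embedded as a module in other content.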

When DAM and Content Management are combined, the CM system has a broader scope than a DAMS and largely does the heavy lifting of content assembly.  A content creator working in a CMS can pull digital material from a DAMS. A content curator may choose to push finished content from a CMS to a DAMS.

The value – and dangers – of volunteer editors

Our culture is undergoing a rapid “wikification,” where nearly every form of content imaginable can now be edited and reconfigured by members of a community. Supported by technological innovations that make remote editing easier to track and implement, legions of volunteer editors have emerged to help authenticate, structure and moderate the vast quantity of content available to the public. Indeed, Wikipedia benefits from the volunteer services of over 80,000 editors who comb through the more than 38 million articles hosted on the site, verifying data and flagging errors. These editors are unpaid, largely anonymous and not required to have any kind of formal training.

Why would people donate countless hours to manage content to little or no acclaim? And a deeper question: Are these volunteer editors a boon for content managers or a danger?

“These editors are unpaid, anonymous and not required to have any kind of formal training.”

Motivations and psychology 
From an economic perspective, the decision to employ volunteer editors is a no-brainer. If an organization is able to obtain free editing services, it can redirect resources to other capital-intensive projects. There is even a qualitative argument to be made based on the “crowdsource” aspect of community editing – that, by opening up editing privileges to the community at large, the ability to quickly verify and validate data on a grand scale becomes possible without having to wrestle with outside interests and intellectual gatekeepers.

To understand the value of volunteer editing, it helps to start by examining the motivations of a person who engages in it. Looking at the question of why people volunteer to edit for the site, the Behavioral Science and Policy Association recently conducted an experiment to see what factors motivated community editors working within Wikipedia. BSPA randomly assigned certain editors within the German-language Wikipedia community an “Edelweiss with Star” badge, which could be displayed prominently on a user’s profile or otherwise hidden. Members of this badged community exhibited higher rates of editor retention – over 20 percent after one month and 14 percent after two months.

While this would lead some to assume that public recognition could boost editor retention, the experiment found that only about 6 percent of the badge recipients opted to display the badge publicly – implying that recognition within the community may not be a strong driver of retention. This led study author Jana Gallus, a postdoctoral fellow in the Behavioral Insights Group at Harvard, to speculate that each editor’s feeling of belonging to a community may drive people to volunteer in such high numbers.

Edits attract edits
Then there are the dynamics of the community/volunteer editing process itself. Stephan Seiler, an economist and associate professor of marketing at Stanford Graduate School of Business, and Aleksi Aaltonen, an assistant professor of information systems at Warwick Business School, studied the editing patterns of nearly 1,310 articles over eight years. Articles that were community edited, they found, tended to be edited frequently, attracting new edits and editors like magnets. This they dubbed the “cumulative growth effect,” which basically means a snowballing of content editing that occurs once a prepopulated article attracts the attention of editors – which in turn begets more edits.

“Simply putting [an article up] up and hoping that people will contribute won’t work,” Seiler told Stanford Business. “But any action that increases content can trigger further contributions.”

“Inaccuracies can become increasingly hard to track and invalidate.”

The dangers of volunteer editing
It’s not clear that the “cumulative growth effect” is necessarily a good thing. One of the much lamented aspects of Wikipedia is that it can be edited – and reedited – at the whim of almost any registered user. This has led to an unknown number of hoaxes, frauds and vandalized pages – inaccuracies that can become increasingly hard to track and invalidate if they are used as sourcing for journalism or academia.

This is the essential danger of volunteer editing: without required qualifications – and without the ability to meaningfully penalize poor edits or incentivize good ones – inaccuracies are likely. Even in its most benign occurrence, a simple error or misunderstanding can taint the validity of a piece of content. At worst, you can get intentional and malicious obfuscation of data. This has kept many content management organizations from adopting the full-scale community editing capabilities pioneered by Wikipedia.

Copyright and trademark in content management systems


Given that much of content creation is oriented around the construction of instructional or technical documentation, the issue of copyright and trademark is often not considered when designers and authors are working with a content management platform. Yet the ubiquity of existing content and data, as well as changing regulatory guidance and the commercial interests of copyright holders, has made this an important consideration.

“Even within technical writing, an eye for copyright must be observed.”

Even within technical writing, an eye for rights must be observed. In laying out his guiding principles for an effective content marketing strategy, columnist Robert Norris writes in The Content Wrangler that organizations should make sure that “copyright is respected, intellectual property is protected and digital record retention is prescribed.”

Specifically, he cautions that an enterprise publishing initiative must make “a commitment to integrity and record-keeping” by archiving source material. Still, this in and of itself can become problematic if the XML content management platform cannot properly handle copyrighted or trademarked material. If this material is treated the same as public domain materials or common knowledge in an automated curation or authorship process, copyright infringement can occur and be disseminated without being flagged ahead of time.

This underscores the importance of creating “fair use,” permission and citation protocols in your content authoring processes, and ensuring that the XML CMS supports these protocols. By flagging copyrighted content and adding hypertext data about the copyright holder, the sourcing of copyrighted or trademarked content can be made subject to tiered rules when a new piece of content is built – prompting authors to reach out to the copyright holder for approval or automatically adding copyright notices to documents.
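
As a hedged illustration of “tiered rules,” the sketch below pairs rights metadata on a reused component with a rule table the CMS could evaluate at assembly time. The element names, attribute values and tier actions are assumptions for illustration, not features of any specific product.

```xml
<!-- illustrative rights metadata on a reused component -->
<component id="spec-table-17">
  <rights status="copyrighted" holder="Example Publishing" permission="pending"/>
</component>

<!-- illustrative tiered sourcing rules evaluated when new content is assembled -->
<sourcing-rules>
  <rule match="rights[@status='public-domain']" action="allow"/>
  <rule match="rights[@status='copyrighted'][@permission='granted']"
        action="insert-copyright-notice"/>
  <rule match="rights[@status='copyrighted'][@permission='pending']"
        action="prompt-author-to-request-approval"/>
  <rule match="rights[@status='trademarked']" action="require-legal-review"/>
</sourcing-rules>
```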

Furthermore, the more sophisticated XML content management platforms allow writers to repurpose data contained within copyrighted content and build content from there. The legal guidance on copyright holds that information – unless a trade secret or somehow proprietary – is not protected; copyright protection applies to the way the information is expressed. This makes component-based authoring paradigms like DITA useful for coding content from a particular source and identifying which pieces of information are subject to proprietary protection.

Approaching sourcing from this angle, however, requires a significant amount of capability in the content management platform; hence many CMSs (particularly within education) opt for linking rather than importing copyrighted content wholesale. When linking is not an option, though, importing copyrighted content into an XML CMS like Astoria requires proper labeling and role-based access designations to avoid the legal hazards surrounding copyrighted content and to protect against exposure or misuse by parties not authorized to work with the material.

Overcoming the issue of scaling

Developing a high-level content management architecture in the age of big data uncovers hidden challenges, ones that engineers and program designers never could have anticipated in previous years. While our computer storage and processing capacities continue to expand, the sheer volume of data being produced has led to many computing issues relating to scale.

The von Neumann bottleneck
Even the most sophisticated CMS available on the market cannot hope to handle every single piece of content being churned out. For standard personal computers, this leads to what experts have dubbed the von Neumann bottleneck. As Astoria Software CEO Michael Rosinski recently discussed with The Content Wrangler's Scott Abel, this refers to a common latency issue occurring when discrete processing tasks are completed linearly, one at a time.

The von Neumann bottleneck is easy to understand.  Even though processors have increased in speed and computer memory has increased in density, data transfer speeds between the CPU and RAM have not increased.  This means that even the most powerful CPU spends as much as 75% of its time waiting for data to load from RAM.  CPU designer Jonathan Kang goes even further, claiming that this architecture model results in "wasteful" transfers and implying that the data need not be retrieved from RAM in the first place.

"To scale effectively, implementing predictive algorithms is necessary."

Predictive and intelligent
The solution, as Mr. Kang sees it, is to associate data with the instructions for its use.  In that way, as the instructions move into the CPU, the data pertaining to those instructions moves into the CPU cache, or is accessible through an alternate addressing channel designed specifically for data.

Another approach, and one more amenable to large sets of data, is to preprocess the data as it is ingested.  We recognize that not all content will be of high value in a particular set – most of it may in fact be of relatively low value – so the ability to approach content management with a sense of data relevance allows programmers to apply CPU and RAM resources to the data with the highest value.
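
One hedged way to picture “preprocessing for relevance” is to tag each ingested item with a relevance score and let a simple policy decide how much processing it receives. The markup below is an illustrative sketch, not the configuration of any real ingestion pipeline.

```xml
<!-- illustrative sketch of relevance-aware ingestion, not a real product configuration -->
<ingestion-policy>
  <tier min-relevance="0.8" action="index-fully, extract-entities, cache-in-memory"/>
  <tier min-relevance="0.4" action="index-metadata-only"/>
  <tier min-relevance="0.0" action="archive-without-processing"/>
</ingestion-policy>

<item source="support-forum" relevance="0.87">
  <title>Workaround for installer error 0x80070057</title>
</item>
<item source="support-forum" relevance="0.12">
  <title>Off-topic discussion thread</title>
</item>
```

Under a policy like this, CPU and memory are spent on the high-value items, while low-value content is parked until someone asks for it – the sense in which relevance-aware ingestion works around the bottleneck rather than through it.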

This is at the core of intelligent computing – and intelligent content. Since existing hardware architectures limit a computer's ability to transfer data, there is much to be gained by creating data ingestion programs that are able to mimic a human's ability to determine data relevance and recognize content insights – techniques that overcome latency by quickly and efficiently pinpointing only the data we need.

"Every time a piece of intelligent content is processed, the machine 'learns.'"

Content versus computing
Fully intelligent computing (essentially a form of AI) remains elusive, but within the realm of content management, great strides are being made every day. One of the biggest innovations is a change of approach: from placing the full burden on computing to integrating "intelligent" features into the content itself. With more sophisticated content architectures such as XML and DITA, we can add extensive semantic context, aiding the CMS by tagging and flagging data relevance. Every time a piece of intelligent content is processed, the machine "learns" patterns and uses those patterns to help deal with issues of scale.

Over time, the combination of machine learning and structured, intelligent content will lead to faster, more accurate decision-making and the ability to keep up with a constant influx of new data. It can connect multiple data sets, recognizing common metadata tags across platforms, devices and channels and making data aggregation easier. This will have an immense impact on all industries, from retail to customer service to education to medicine.

Segmentation as a key to personalized content delivery, part 2

Welcome back to the second part of our series on ways to segment audiences to ensure a customized, pinpointed content experience for each user. Last time, we reviewed the various considerations and rules that govern segmentation. In this installment, we describe an implementation plan for effective segmentation and incorporating those rules into content classification.

Capturing segments and delivering content
The first task in establishing effective user-facing content is to identify your audience, both intended and actual.

"Develop your audience baseline; i.e., your assumptions about your audience."

Early on, develop a clear strategy built around an ideal user profile, broken down demographically, behaviorally and psychographically. Prior to content delivery, this will act as your audience baseline, i.e. who you think your audience is. Do not treat this ideal user profile as gospel; you may find – once you are up and running delivering content – that your actual user is very different from what you envisioned.

Having set your audience baseline, tag your content according to the anticipated values and interests of each segment demographic. Ideally, this tagging is built on extensive market research or experiential knowledge of these particular users. Again, nothing about this initial tagging should be considered set in stone, since even the most knowledgeable industry experts may have their initial findings challenged when expanding out to the broader audience. In fact, your content management system, such as the Astoria Portal, should allow you to build in rule overrides or retag content that has already been published.

From there comes capturing segments. There are a variety of methods to accomplish this task, from market research to advanced data aggregation and analytics tools, but one of the most effective methods is simple self-selection. Make the interface intuitive enough for users to identify their segments, and have this selection drive their user experience.

Tagging and retagging
An example of this in action could be a website gateway for educational materials where the landing page prompts visitors to identify themselves as teachers, administrators, parents or students. After clicking the pertinent segment, the user is then sent to a customized portal, with targeted content delivered directly to them. Alternatively, a site may have a mailing list signup where users are prompted to input demographic and interest data (age, sex, location, how often they buy certain products, how much they spend), which can then be used to automatically deliver content to each email subscriber based on their personal preferences.
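
A hedged sketch of the gateway example above: each piece of content carries segment metadata (here via DITA's generic <othermeta> mechanism), and a simple mapping turns the visitor's self-selection on the landing page into a filter on that metadata. The metadata names, portal paths and match expressions are illustrative assumptions.

```xml
<!-- content side: illustrative segment tagging via generic DITA metadata -->
<prolog>
  <metadata>
    <othermeta name="audience-segment" content="teachers"/>
  </metadata>
</prolog>

<!-- delivery side: illustrative mapping from a self-selected segment to tagged content -->
<segment-routing>
  <segment id="teachers"       portal="/portal/teachers"       match="audience-segment=teachers"/>
  <segment id="administrators" portal="/portal/administrators" match="audience-segment=administrators"/>
  <segment id="parents"        portal="/portal/parents"        match="audience-segment=parents"/>
  <segment id="students"       portal="/portal/students"       match="audience-segment=students"/>
</segment-routing>
```

The mailing-list example works the same way: the demographic and interest answers simply populate additional metadata fields that the delivery rules can match against.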

"Are unexpected content requests coming in?"

Once users have started interacting with your content, the true test of your segmentation rules and category-level metadata begins. This can be determined by looking at user behaviors: Are users spending more or less time with some content compared to others? Are unexpected content requests coming in? Regular auditing of your metadata tagging may help pinpoint misclassifications and evolving user needs, or even suggest more granular tagging rules.

Keep segments clean and segregated
It is critical that the user experience be as intuitive and streamlined as possible. When it comes to delivering customized content, start with a common set that works across all the different user experiences, sharing the same essential data while supplementing and restructuring the experience based on the user persona. Avoid prompts that could lead users away from the central content hub and instead try to have content flow inward. A good technique is a prompt like "Read more", which can expand content on the current page with supplementary materials.

By having audiences self-identify based on their content preferences, you can match metadata-tagged content to each segment while measuring results against expectations. This can offer vital insight for content managers about how effectively the tagging system – and the content itself – serves the intended audience segment.

Segmentation as a key to personalized content delivery, part 1

When it comes to delivering personalized, localized and – most importantly – relevant content to audiences, segmentation is a key concept. Segmentation, in this context, does not refer to the division of text into translatable chunks. Instead, it refers to the classification of content consumers according to specific parameters.

There is a compelling business need for incorporating segmentation into an information architecture: helping customers make purchase decisions about the products and services described in your content. The underlying problem is the evolutionary expansion of technology; namely, as storage capacity expands and content management capabilities grow more sophisticated, the volume of data under active management also expands. Yet brands, companies and other content providers must continue to deliver only the most relevant content to their customers while screening out irrelevant data that can otherwise derail crucial purchase decisions.

The solution to this problem has two parts. The first action, discussed in the following paragraphs, is to develop effective, defensible rules for segregating the people who read your content. The second part, which will be discussed separately, is to incorporate those rules into the way you classify your content.

"The first step is to building segmentation is to develop audience personas."

So how do you develop appropriate segmentation rules? The first step is to build audience personas, defining the qualities and interests that will route your audience to different pieces of content. Building these simply from end-user IP addresses, however, is tilting at windmills. A better segmentation strategy focuses on a few primary areas where audiences offer distinguishing characteristics and then sorts those characteristics into their respective content channels. Here are candidate primary identifiers for segmenting your audience.

Geographic
While it's asking the impossible to build audience personas solely from IP addresses, it is nonetheless true that IP addresses offer some meaningful guidance to the task. For example, you can know whether or not to deliver translated or localized versions of your content.  You can pinpoint cultural signifiers and traditions that may shape how your audience interacts with content. Then there's the fact that IP addresses help you identify key geographic zones of influence, be they broad measures such as a continent or a country, or more precise identifiers such as a region, a county or ZIP code. While our increasingly interconnected world has broken down the barriers that once defined a geography, different areas can still drive audience behaviors and thought in a way that requires segmenting.
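
One hedged way to picture this is a small lookup table that maps a resolved region to a language variant and formatting conventions; the structure is illustrative, not the configuration of any particular delivery system.

```xml
<!-- illustrative region-to-variant mapping, not any product's real configuration -->
<geo-routing>
  <region match="DE,AT,CH" lang="de"    date-format="DD.MM.YYYY" units="metric"/>
  <region match="US"       lang="en-US" date-format="MM/DD/YYYY" units="imperial"/>
  <region match="JP"       lang="ja"    date-format="YYYY/MM/DD" units="metric"/>
  <region match="*"        lang="en"    date-format="YYYY-MM-DD" units="metric"/>
</geo-routing>
```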

Demographic
From the "where?" of geographic identifiers, demography identifies the "who?" that makes up your audience. Demographics can take on a nearly infinite array of attributes, from gender to age to national origin to economic status and income. Probably the most immediately relevant is occupation, since the services being offered can be specific to a certain audience within a single company (for example: HR supervisors versus regional managers).

"Demography identifies the 'who?' of your audience."

Behavioral
Behavioral attributes are simply the answer to the question, "What actions does this audience segment regularly perform?" This method examines patterns of behavior: where people shop, what they buy, what kind of web pages they look at and for how long.

Psychographic
Psychographic segmentation is one of the more nuanced approaches to audience segmenting since it draws on geographic, demographic and behavioral data to synthesize a psychological profile. This profile, though, is less about identifying patterns in what people do and more about identifying what and how they think. Hence, a profile subjected to psychographic segmentation focuses on the following attributes (a persona sketch combining all four identifier families follows this list):

  • Lifestyle and personality. Beyond their behavior, audiences identify themselves in accordance with the aspects that have the most meaning in their lives. This can be a self-identified interest derived from behavior but distinct from it. A person may identify with and fit the profile of a "Harley-Davidson bike owner" even without purchasing a motorcycle. A sustained interest in the culture and ephemera related to Harley-Davidson ownership is enough to accurately capture this audience segment and deliver customized content. 
  • Values, attitudes and opinions. Values, attitudes and opinions provide the framework of thought and perspective, which drives an emotional response to stimuli.
  • Social class. Class consciousness plays a big role in psychographic identification. Class differs from lifestyle in that "class" describes an inherited set of rules (acquired through family or peer-group interactions) governing where people exist and how they relate to others in various classes, whereas "lifestyle" describes a set of chosen interests. A prime example is the upper-class young lady looking for information on handbags who is driven by the luxury standards of her social circle, as compared to a middle-class young lady looking for information on inexpensive, rugged alternatives. 
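
Pulling the four identifier families together, a persona might be recorded roughly as below. Every element name and value here is an illustrative assumption rather than a field from any real profiling system; the example echoes the Harley-Davidson segment described above.

```xml
<!-- illustrative persona record combining the four identifier families -->
<persona id="aspiring-rider">
  <geographic country="US" region="Midwest"/>
  <demographic age-range="25-34" occupation="regional-manager" income-band="middle"/>
  <behavioral>
    <signal>Browses motorcycle gear reviews weekly</signal>
    <signal>Has not purchased a motorcycle</signal>
  </behavioral>
  <psychographic>
    <lifestyle>Identifies with Harley-Davidson culture</lifestyle>
    <values>Independence, craftsmanship</values>
    <social-class>Middle class</social-class>
  </psychographic>
</persona>
```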

Next time, we will explore the ways to most effectively and accurately identify audience segments and funnel them into your content.