How DITA and XML facilitate managing regulation revisions

Since the start of publicly distributed legislation, the U.S. government has sought to make non-classified government documents as easy as possible to read and distribute. This has led to a sometimes tentative embrace of content languages as they are developed and deployed across different industries. In June 2016, House Speaker Paul Ryan (R-WI) spoke to attendees of the 2016 Legislative Data and Transparency Conference and emphasized the importance of translating all legislative measures into a standardized format like XML. Ryan framed this as an effort to promote governmental transparency.

“Now we’re working to go further, and publish even more current and past documents in XML,” he told the assembled audience. “I’ve asked our team to keep moving ahead by publishing all legislative measures in a standard format. That means enrolled measures, public laws, and statutes at large. We want this data to be as accessible as possible throughout the legislative cycle.”

“The goal is simplicity – something that XML models excel at.”

Keeping it simple and accessible
As with all forms of communication, the goal is simplicity – something that XML and DITA models excel at. The Federal Register affirms the importance of making regulations readable with stylistic guidance on how to author legislative documents.

“Readable regulations help the public find requirements quickly and understand them easily,” writes the Register. “They increase compliance, strengthen enforcement, and decrease mistakes, frustration, phone calls, appeals, and distrust of government. Everyone gains.”

This focus on compliance and limiting confusion – and the accompanying administrative nightmare – is a key way that DITA and XML can make legislation and regulations less of a hassle. Since the law governing any particular industry is a living document – made up of countless, frequently revised laws that dictate everything from tax codes to prohibited transactions – ensuring that documents are not only accessible but also find their way into the most relevant hands can be easier said than done.

A ‘quality control nightmare’
This was a particular challenge that Chris Drake, the deputy legal counsel to Connecticut Governor Dannel Malloy, identified and sought to make less troublesome. In 2014, prior to Speaker Ryan’s comments, Drake and the governor’s office attempted to launch an “e-Regulation” program, moving away from the traditional, inefficient paper-based authoring process toward something that would allow users to interact more easily with legislative content.

“Some agencies didn’t know where the most recent text-edited version of a regulation was,” Drake told GCN. “It was a quality control nightmare. We needed a system that was more transparent and accessible.”

“Lawmakers may be less than experienced with content authoring platforms.”

This e-Regulation system was pioneered with the help of Fairfax Data Systems to convert PDFs into DITA XML, with the long-term goal of authoring legislation directly in XML so as to limit potential conversion errors and inefficiency. This in itself posed a challenge: lawmakers and their staff typically have little experience with content authoring platforms, creating a steep learning curve. To compensate, the e-Regulation initiative broke authorship into a two-stage process, with the first stage relying on automation.

“Extraction is a mostly automated process,” said Mark Gross, president and CEO of DCL, a company also assisting with the conversion, in comments to GCN. “The trick is to do it in a consistent manner, which is not that easy.”

Following extraction, the documents were edited and approved in XML draft form by humans. While still time- and resource-intensive, the process will in the long run save countless hours of repeatedly converting documents into new formats.

“If we had tried this six or seven years ago we might not have been able to find a solution that does this,” Drake said.

The ongoing value of DITA legislation
Of course, beyond accessibility, the virtue of an XML content framework is the ability to integrate live regulatory changes into existing and future content. As legislative content is converted into DITA, each element becomes a component. If a law is amended, changed, or struck from the books, the components of that law as it relates to technical documents like work manuals and safety training can be automatically reconfigured to match the most up-to-date regulatory guidance. With agencies like the Occupational Safety and Health Administration on board with publishing all guidance in DITA, both companies subject to the regulations and the regulators themselves can work on the same page, without requiring extensive redrafting every time the law changes.
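As a simplified sketch of how this reuse works in DITA: the regulation's wording lives in a single topic, and downstream documents pull it in by content reference (conref), so republishing them picks up the amended wording automatically. The file names, IDs, and wording below are invented for illustration:

```xml
<!-- regulation-source.dita: the single authoritative copy of the rule -->
<topic id="hazardous_energy">
  <title>Control of Hazardous Energy</title>
  <body>
    <p id="lockout_rule">Authorized employees must apply a lockout device
    before servicing or maintaining equipment.</p>
  </body>
</topic>

<!-- safety-manual.dita: a work manual reusing the rule by reference -->
<topic id="plant_safety">
  <title>Plant Safety Procedures</title>
  <body>
    <p conref="regulation-source.dita#hazardous_energy/lockout_rule"/>
  </body>
</topic>
```

When the regulation topic is amended, every document that conrefs the paragraph resolves to the new text at publish time, with no redrafting of the manuals themselves.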

The impact of corporate interests on content development and marketing

The idea that we live in a world driven by niche interests – particularly when it comes to the creation of new content – may not reflect the whole picture. True, the content landscape is broader and more multivariate than it’s ever been. Yet amid this niche-content renaissance, the pressure to monetize content has never been greater, leading to the encroachment of corporate influence.

As part of an announcement related to its 2017 layoffs, Medium, the online publishing company started by Twitter co-founder Ev Williams, described how ad-driven online media is a “broken system” and how this is undermining the company’s bottom line. Williams took to the company’s blog to defend the layoffs as a step away from the ad-driven business model and a means of renewing the company’s focus on content.

“The vast majority of articles, videos, and other ‘content’ we all consume on a daily basis is paid for — directly or indirectly — by corporations who are funding it in order to advance their goals,” Williams wrote. “And it is measured, amplified and rewarded based on its ability to do that. Period. As a result, we get … well, what we get. And it’s getting worse. That’s a big part of why we are making this change today.”

A detriment to content … and society? 
While Williams remains coy about how exactly Medium will shift its business model to rely less on ad dollars, he isn’t alone in his assessment that ad-driven material may have a negative impact on the quality of content. Speaking to Harvard Business School’s Working Knowledge blog, Feng Zhu, assistant professor of business administration at Harvard, had equally strong words about the impact of ad-driven content creation models.

“Ads may have a negative impact on the quality of content.”

“Many media scholars think this revenue model is detrimental to society because it provides incentive for the content provider to produce only popular content that can attract lots of eyeballs,” said Zhu. “Content providers are serving advertisers rather than the audience, and consumers with niche preferences will be out of luck because the content they’re seeking only caters to a small group of people.”

Zhu, alongside fellow researcher Monic Sun, sought to study the impact of ad-revenue-sharing programs on bloggers and content creators. Looking at a data set from a leading Chinese media website that offers a range of services, including blogging, Zhu and Sun were able to compare posts written by authors taking part in an ad-based profit model versus ones who did not.

Comparing the two populations, Zhu and Sun were able to determine that the posts supported by ad revenue showed a significant uptick in content focusing on “popular” topics, such as the stock market, salacious content and celebrities. Interestingly, while the topics became more culturally homogenous, the ad-supported blogs were typically longer, published more frequently and included more photos and video clips than those not ad-supported.

What can we take from this data, as well as the warnings issued by Williams? The lesson here may be that content backed by advertising facilitates a certain level of depth and innovation not easily achieved without some form of sponsorship – yet this comes at a price. The key for content creators and advertisers looking to work together and leverage a content strategy is identifying the niche they are writing for and determining the demand and – ideally – value of the content before bringing it to market.

Continuing trends in content language and marketing

Content languages and content marketing are poised to enter a highly competitive space. While previous years have seen much of content creation and management innovation focused on generating volumes of content, the past two years in many ways marked a shift in the market.

Content volume – without a meaningful way to parse and verify data – has led to the challenge of finding relevant, useful content for specific audiences. The burden of parsing, packaging and distributing content has traditionally fallen to marketers, yet the increasing role of sophisticated automation is helping mitigate the nearly unmanageable influx. With that in mind, we see several distinct content-related trends taking shape in the near future.

Increasingly effective segmentation
The struggle to match relevant content with certain populations remains at front of mind for content designers and marketers. In many ways, niche is the new norm. The issue in modern times is rarely, “Is there content that caters to a specific audience?” Rather, more often than not, the question becomes, “Can this content be located and delivered?”

In response, audience segmentation has grown from an auxiliary tool to aid in distribution to a bedrock of content management and marketing. While traditional conversion rates for content remain low, as content segmentation gets more sophisticated and automation drives increased speed and responsiveness, marketers are reporting that they feel their efforts are more successful. According to data from the 2017 B2B Content Marketing Benchmarks, Budgets and Trends report published by the Content Marketing Institute, over 60 percent of B2B and B2C content marketers say their efforts are “much more” or “somewhat more” successful than last year.

Multimedia on the rise
Over the past few years, our definition of content has changed dramatically. This in turn has driven content language design, expanding the way we integrate data modules and other features into responsive, dynamic content.

“Social media has made a massive push to embrace new multimedia content.”

With social media companies making a massive push for users to embrace new multimedia content generation – think the wide-scale rollout of Facebook Live and the announcement of live-streaming functionality for Instagram – businesses must contend with the prevalence of multimedia as an effective communication and collaboration tool. With that in mind, the ability to parse and distribute non-written content like video, memes, GIFs and other popular forms of media will be a crucial challenge for content designers going forward.

Mobile dominance
There’s nothing new about how mobile devices have upended the market share of desktops. Yet the way that mobile is being used to access content is shifting in subtle ways. According to comScore, 2015 was the year that saw smartphone app usage inch closer to half of all digital media time spent.

This has big implications when it comes to CMS design. Specifically, it means that – in addition to auditing content and content delivery for mobile – content marketers and managers need to look at the ways that proprietary apps filter, deliver and format content.

User-to-user content
One of the more fascinating trends we’ve seen in the past few years is how people interact with user-generated content. According to AdWeek, 85 percent of people trust content made by other users more than content generated by businesses or brands. As such, they are almost twice as likely to share it with friends and family.

“Maintaining branded content still remains a challenge.”

This will have a profound effect on the influx of new content in coming years. Simply put, brands and businesses are turning to users to create content for other users. To fully leverage this, a brand must design a CMS platform that users can easily interface with, that prompts users to create content – a la Facebook’s “reminder” notifications – and that can still be maintained by the brand.

While companies like Facebook have led the way in this arena, keeping content within brand guidelines remains a challenge. This is something that automated metadata tagging will help with, allowing the content delivery platform to more quickly and accurately identify content that falls outside its brand guidelines.
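As a rough illustration of the idea (the tag vocabulary, banned terms, and post data below are all invented, and a production system would use trained classifiers rather than keyword sets), a tagging-and-flagging pass might look like:

```python
# Hypothetical, simplified stand-in for automated metadata tagging.
TOPIC_TAGS = {"pricing", "support", "howto"}
BANNED_TERMS = {"guaranteed", "miracle"}  # invented brand-guideline rules

def tag_and_flag(posts):
    """Attach topic tags to each post and flag posts that breach guidelines."""
    flagged = []
    for post in posts:
        words = set(post["text"].lower().split())
        post["tags"] = sorted(words & TOPIC_TAGS)   # simple metadata tagging
        if words & BANNED_TERMS:                    # guideline violation check
            flagged.append(post["id"])
    return flagged

posts = [
    {"id": 1, "text": "Guaranteed results with our pricing"},
    {"id": 2, "text": "A howto on support"},
]
print(tag_and_flag(posts))  # [1]
print(posts[1]["tags"])     # ['howto', 'support']
```

The point is the shape of the workflow: tagging enriches every item with metadata, while the flag list becomes a review queue for content outside the guidelines.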

Pay to play
While not directly connected to the development of content, the slow but steady rollout of paid promotion at the expense of organic reach is having an impact on the way content delivery platforms are designed. Forgoing its traditional chronological feed, Instagram introduced an algorithm-based feed in 2016, marketed as a way for the company to deliver content based on “the moments we believe you will care about the most.”

For many in the world of content creation and management, this kind of vague language combined with a secretive and proprietary algorithm does not bode well. Because recent data from Social@Ogilvy shows that for Facebook pages with more than 500,000 likes, the average organic reach has fallen to about 2 percent, many are worried that this evolution is inevitable for all content on social media platforms.

“I think marketers are just accepting the fact that Facebook is a place where you pay to play. The whole story of building a community is not operative anymore,” MEC North America Head of Social Noah Mallin told Advertising Age. “For building a brand story, you really have to do it with a budget.”

As each of these trends shapes the world of content management and marketing to come, the key for those in content design is to remain abreast of things as they evolve. Technology is likely only to speed the rate of innovation, meaning that a top priority should be to remain agile and responsive to market demand.

How content is fed and influenced by attribution modeling

The standard configuration of most content generating organizations has authors on one side and marketers on the other. This division of labor facilitates the creation and distribution of content, yet fundamentally both sides serve the same ends: to create compelling content for a particular audience.

Attribution modeling – the process of determining the most effective pathways that deliver desired results – feeds content creation and influences authors in several distinct ways. While ostensibly a marketing diagnostic and analytic process, understanding how content drives valuable operational metrics like conversion, retention and sales leads can help authors and reviewers better tailor content for audiences.

Single-touch attribution
There are several varieties of attribution models that marketers focus on, each with its own features and implications for content. Nevertheless, there are two major categories within attribution modeling: single-touch and multi-touch.

Single-touch attribution refers to when a customer is converted after a single interaction with content. Typically, single-touch attribution models are broken into first-touch or last-touch attribution, referring to what the analyst determines to have been the most meaningful interaction that eventually drove conversion. First-touch emphasizes the importance of the moment the customer enters the marketing funnel, essentially attributing the conversion to this single moment. Last-touch takes on the perspective that the final step in the marketing process was the one that prompted the customer to make the leap.

“The virtue of single-touch attribution is its simplicity.”

The virtue of single-touch attribution is its simplicity: It can be implemented with ease, and marketers and analysts can point to a singular moment in the marketing process, thereby zeroing in more effectively on these stages in content strategy recommendations. Within a single-touch content strategy, it’s easy to emphasize the importance of customer “hooks” that prompt conversion. CTAs, sign-up forms and customer personas all factor heavily into this attribution model, and the conversion process is seen as linear, which in turn allows content authors to focus on these features.
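To make the first-touch/last-touch distinction concrete, here is a minimal sketch; the channel names and journeys are invented for illustration:

```python
from collections import Counter

def single_touch_credit(journeys, mode="first"):
    """Give each conversion's full credit to a single touchpoint.

    journeys: list of customer journeys, each an ordered list of channels.
    mode: "first" credits the channel that brought the customer into the
          funnel; "last" credits the final touch before conversion.
    """
    credit = Counter()
    for touches in journeys:
        if touches:
            credit[touches[0] if mode == "first" else touches[-1]] += 1
    return dict(credit)

journeys = [
    ["blog", "email", "demo"],   # each list ends at the conversion
    ["ad", "demo"],
    ["blog", "webinar"],
]
print(single_touch_credit(journeys, "first"))  # {'blog': 2, 'ad': 1}
print(single_touch_credit(journeys, "last"))   # {'demo': 2, 'webinar': 1}
```

The same three journeys produce very different channel rankings depending on which single touch is credited, which is exactly the judgment call the analyst makes.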

However, one of the pitfalls of single-touch attribution is that, due to its simplicity, there is a significant risk of errors and improper attribution. As Jordan Con of Bizible points out, technological limitations – combined with marketing speculation – often lead to conclusions that may not accurately reflect the customer pathway.

“The issue here is that if you are using conversion tracking (e.g. Google Analytics) in a B2B setting, the time between first touch and the conversion can be longer than the common 30- to 90-day expiration on the tracking cookie,” Con wrote. “So often times, this model is really attributing credit to the first touch that’s within the cookie expiration window, and not the true first touch.”

Multi-touch attribution
Recognizing the shortcomings of single-touch attribution models, multi-touch is a more nuanced approach to measuring the efficacy of content. Multi-touch attribution presupposes that there is rarely a single piece of content that drives conversion. Instead, a sustained and multi-step process of encountering content is what will eventually result in conversion. Data from MarketingSherpa suggests that multi-touch attribution models may increase ROI by 22 percent year over year.

The nuance of this approach lends itself to both a variety of different models within the umbrella of multi-touch attribution as well as its likelihood of filtering into a greater content strategy. Rather than presuming that single exposure will be enough to hook a potential customer, multi-touch attribution allows content to be crafted in more subtle ways. Content authors may be better served focusing on branding, thought leadership and the creation of a conversion-encouraging environment instead of leading with a strong CTA or forms.
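The simplest multi-touch variant, linear attribution, splits each conversion's credit evenly across every touchpoint in the journey. A minimal sketch (invented channel names, and only one of several possible weighting schemes):

```python
def linear_attribution(journeys):
    """Split each conversion's credit evenly across all touches in the journey."""
    credit = {}
    for touches in journeys:
        if not touches:
            continue
        share = 1.0 / len(touches)          # every touchpoint gets an equal share
        for channel in touches:
            credit[channel] = credit.get(channel, 0.0) + share
    return credit

journeys = [["blog", "email", "demo"], ["ad", "demo"]]
for channel, share in sorted(linear_attribution(journeys).items()):
    print(channel, round(share, 2))
# ad 0.5
# blog 0.33
# demo 0.83
# email 0.33
```

Other multi-touch schemes (time-decay, U-shaped) change only the weighting step, which is why the approach adapts so readily to different content strategies.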

While neither single-touch nor multi-touch models are perfect, both can inform a content development and management strategy in distinct ways. The close relationship between marketing and authorship means that determining the most operationally helpful model is key.

Make a Quantum Shift in Structured Authoring

Eric Kuhnen and Michael Rosinski join Ed Marsh to talk about their presentation at LavaCon, Making a Quantum Shift in Structured Authoring.

According to Eric, one of the key challenges in the content industry is the inability of multiple groups within a department to share content while using a common set of tools. The technical documentation team works with structured content, and the content repository is often not available to those outside the team. Astoria Software now provides integration with WittyParrot to enable “rich sharing” and ensure that XML-based content is available to non-XML content creators.

Julie Newcome of Ultimate Software, an Astoria Software customer, immediately saw the appeal of the integration:

When we first saw the demo with Witty Parrot, it really excited us. One of the things we have had to overcome is the challenge of sharing content between departments. The benefit for us is that [Astoria with WittyParrot] allows other departments to use vetted content, content that is accurate for a customer-facing audience without having the technical skills to author in DITA, and that’s huge for us.

Ultimate Software is scheduled to go live with their integration of WittyParrot soon after the LavaCon Conference. You can see a demonstration of their implementation at the Astoria booth; the demonstration includes:

  • Pulling technical content, such as a task or FAQ, from the content repository and sharing it with a customer
  • Generating instructor slides for a training class directly from the source DITA and creating an updated course manual, which is a faster, more efficient, and better managed process

On this Podcast

  • Michael Rosinski: President and CEO of Astoria Software, Inc.
  • Julie Newcome: Content Management Analyst at Ultimate Software.
  • Eric Kuhnen: An expert in product research, development and management.
  • Ed Marsh: Creator and host of the Content Content podcast.


Castaways: Dealing with orphaned content

Even the most robust, expertly maintained content management system will inevitably face the challenge of orphaned content. Regardless of how small or seemingly insignificant the content block may be, when an important piece of your written intellectual property loses its link to its original author, it also loses the chain of provenance that gave rise to the content in the first place. The effect is a disrupted chain of linkages, rendering many related content blocks essentially useless and degrading the value of the IP itself.

It takes careful and regular monitoring to avoid orphaned content and the subsequent increase in resources needed to rectify the condition. Let’s take a look at a few issues surrounding orphaned content, starting with its genesis.

What makes content ‘orphaned’?
Content is orphaned when it loses its link to authorship and, therefore, its link to an authoritative source. This can occur if a CMS user/author account is deleted or updated without the content itself being updated. The content subsequently can become a “problem resource” – disconnected from clear authorship permissions and only able to be updated or deleted by a system administrator.

“You can’t verify the veracity of orphaned content.”

When content is orphaned, it can send ripples through the entire CMS. Every piece of linked content that refers back to data owned by the orphaned content is affected by the change in status, becoming either broken or uneditable, since the original author no longer exists in the CMS. This can be a serious problem, particularly if a CMS has significant user turnover or purges its authors regularly.


The impact of orphaned content
The challenge that heavily linked orphaned content creates is a considerable one. In addition to the manifold software errors it can prompt, the lack of author roles can undermine the authority of the content. Without author accountability, it becomes impossible to verify the veracity of the data underpinning the piece of content – or even whose job it is to keep the content updated. Deleting it may only make matters worse – doing so can further break links in related files and folders.

“As content wranglers accustomed to dealing with orphaned content, we know from firsthand experience that it is unrealistic to rely upon the availability of original authors as the backbone of our quality system,” Robert Norris wrote in The Content Wrangler. “Far too often we’ve been left wondering who is going to fix the problem…and how…and when.”

Repairing orphaned content
To avoid the operational hassle of orphaned content, Norris urges CMS designers to build a mechanism that acts as a “self-examination” for the system, combing through content, flagging issues of quality and authorship, and funneling those issues into a repair feed.

“[It] makes sense to assign topical content ownership at the upper-management level to establish accountability with a role that has authority,” Norris said. “Since every resource we publish incurs a burden of maintenance, this principle places that burden on the shoulders of someone with the resources needed to prioritize and execute the task.”

What this means is that orphaned content ideally needs to be repaired rather than purged. As previously stated, regular author turnover means that the task of repairing orphaned content defaults to a system administrator. The best practice, though, is for the self-examination algorithm to assign ownership to a widely accessible dummy account whereby qualified authors can claim ownership and reestablish the chain of provenance.
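A minimal sketch of such a self-examination pass (the field names and fallback account are invented; a production CMS would hook this into its audit and notification workflow):

```python
def flag_orphans(content_items, active_authors, fallback_owner="unclaimed"):
    """Find items whose author no longer exists in the CMS, reassign them to
    a claimable dummy account, and return their IDs as a repair feed."""
    repair_feed = []
    for item in content_items:
        if item["author"] not in active_authors:
            item["author"] = fallback_owner   # claimable by qualified authors
            repair_feed.append(item["id"])
    return repair_feed

items = [
    {"id": "faq-101", "author": "jsmith"},
    {"id": "guide-7", "author": "departed_user"},
]
print(flag_orphans(items, active_authors={"jsmith"}))  # ['guide-7']
print(items[1]["author"])                              # unclaimed
```

Running a pass like this on every author deletion, rather than during an annual audit, is what keeps the repair feed short.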

By taking a tactical, strategic approach and flagging content as problems arise – rather than only discovering a buildup of orphaned content after an audit – CMS managers can ensure their systems are clean and efficient.

How artificial intelligence will help you deliver exceptional customer experiences with content

Artificial intelligence (AI) is at work right now, even if you don’t notice it. While you’re busy creating new marketing campaigns or documenting new products and services, AI systems are busy augmenting your capabilities.

Behind the scenes, AI is working to help you understand the bigger picture by providing an always-on alert system designed to ensure recognition of patterns, trends, threats, and opportunities hidden deep in the data. Content management systems depend on this data to help improve the way content is created, managed, translated, and delivered; that is, the right content to the right people, when, where, and how they need it.

And yet, most content management systems have yet to incorporate AI-powered functionality into their products. But, that’s about to change.

Once you recognize that content is a business asset, you will see the importance of leveraging every ounce of value from each dollar that is spent developing it. But, recognizing a need—and tackling it—are two very different things.

When ready to take action, you’ll likely find yourself in need of a digital transformation: a profound evolution of business activities (including all things related to producing content) designed to help meet the fast-changing, technology-fueled needs of today, while simultaneously preparing your organization for the changes coming tomorrow. AI is certainly going to be part of the mix.

Today, AI is in use in content production departments around the globe. It helps financial service companies automatically generate content that adheres to U.S. government regulations. AI helps news organizations (like the Associated Press) produce corporate earnings reports on demand, and at scale, much faster—and more consistently—than their business reporters can. And it helps small businesses compete with much larger competitors by helping them develop capabilities previously limited to Fortune 500 companies.

AI also plays a significant role in content distribution and delivery. Today, with a few commands from the keyboard, intelligent agents can be put to work on your behalf. Intelligent agents can be instructed to generate a website, publish content simultaneously—and at the right time and to the right people—to multiple social media outlets, and provide predictive analytics designed to create relevant content of value to prospects and customers.

As AI matures, expect it to expand into other areas of the content lifecycle. For instance, your content management system can be fed insights generated from predictive analytics that can help guide conversations with customers. AI will serve up content designed to steer your prospects toward relevant content, product, and service offerings.

One of the promises AI will deliver is improvements to content management over time. As machines learn, they adapt and become smarter. AI will enforce the rules set in place to govern the creation, management, translation, and delivery of content. AI-enabled content management systems will identify threats (like incongruent content, bad links, security concerns) and help prevent violations of conventions, rules, regulations, and laws. Properly tuned, AI-augmented content management systems will help spot opportunities to produce new content and assist in determining whether the content created delivers the value expected. Education, business, and government will all benefit from AI-powered content personalization.
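The bad-link check, at least, does not need to wait for machine learning; a rule-based pass is a reasonable stand-in today. A sketch for internal links (the page structure here is invented for illustration):

```python
def find_broken_links(pages, known_ids):
    """Report internal links that point at content IDs that no longer exist.

    pages: mapping of page ID -> list of content IDs the page links to.
    """
    broken = []
    for page_id, links in pages.items():
        for target in links:
            if target not in known_ids:
                broken.append((page_id, target))
    return broken

pages = {"intro": ["setup", "faq"], "setup": ["faq", "old-page"]}
print(find_broken_links(pages, known_ids=set(pages) | {"faq"}))
# [('setup', 'old-page')]
```

An AI-augmented CMS would layer learned checks (incongruent content, tone drift) on top of deterministic ones like this.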

The future of artificial intelligence is uncertain in many ways, but one forecast remains clear—all worldwide industries, governments, employees, and retail consumers will be affected.

There are doomsayers, such as Tesla’s Elon Musk, who believes artificial intelligence poses an “existential threat to human civilization.” Some technology leaders are more optimistic, like Facebook’s Mark Zuckerberg, who says artificial intelligence applications will help businesses “build things” that make the world “better.” Zuckerberg says AI promises—over the next 5 to 10 years—to help us develop new operational models.

While I understand the need to be aware of the dangers, I believe harnessing the power of AI cooperatively through partnerships with Fortune 500 companies is our best strategy. An example is the recent deal announced by AT&T and Oracle that combines AI technology and data analytics to improve AT&T’s field service technicians’ workflow—problem discovery, solution efficiency, and overall customer experience. When scheduling an appointment with a telephone installation technician, imagine being given a firm appointment time, rather than a 4-hour appointment window. This same strategy could be deployed in the content world to build an AI-enabled Component Content Management System with content creation, quality, translation, and distribution interacting in real time.

My advice: Learn everything you can about AI. Seek out opportunities to become involved in projects, both at work and in your free time. In a world in which machine-ready content will play a critical role in your remaining relevant, it’s best to stay ahead of the AI curve.

Michael Rosinski, Astoria Software’s President & CEO, Discusses Augmented Reality: Will It Live Up To The Hype?

There’s a lot of hype about augmented reality (AR) and its impact on content. Lately, it’s getting difficult to avoid. From analyst firms to magazines—and from newspapers to corporate websites and blogs—everyone seems to be writing about how AR will transform the way we live, work, and play.

Virtual reality versus augmented reality

To understand the potential impact that AR may have in the future, it helps to start with a clear understanding of the difference between augmented and virtual reality (VR). AR and VR are related; AR is a distant cousin of VR. The primary difference between the two is rooted in reality.

  • Virtual reality aims to create convincing—yet artificial—computer-generated experiences that feel real. VR experiences are designed to be stimulating, immersive simulations that are made possible with the help of a headset like Facebook’s Oculus Rift.
  • Augmented reality, on the other hand, aims to complement reality by adding a layer of complementary information—something useful or entertaining—on top of reality. It’s live and in real time. AR makes it possible for us to produce content that can be superimposed over an image of the physical world with the help of a camera-equipped mobile device or specialized headgear. If you’ve watched live television broadcasts of sporting events, chances are you’ve seen AR in action. But, AR’s true value is in the capability it provides users. AR can help consumers make simple repairs to an automobile, learn to cook, and more.

How Augmented Reality Works

While opportunities to apply AR to business are almost unlimited, opinions about its value vary.

The Future of Computing? Perhaps.

Some exclaim that AR is “the future of computing.” They cite examples of AR’s ability to radically transform education, healthcare, food safety, manufacturing, fashion, retail, and entertainment. A recent report from Forrester says AR’s “immersive digital overlays represent an opportunity to improve customer engagement.”

Gartner predicts that “through 2021, businesses will see a rapid evolution of immersive content and applications that will range from consumer entertainment experiences to optimizing complex work processes.”

Gartner also predicts that by 2020, 10 million consumers will use augmented reality while shopping online. “Immersive technologies such as augmented reality increase user engagement with a product or service by enabling a consumer to fully explore features and conveying additional information that can aid in a buying decision,” Gartner says. “This will drive immersive interfaces, including both augmented and virtual reality, to become the standard customer experience paradigm for scenarios requiring human-to-machine interactions.”

Others are not so easily impressed. The naysayers complain that adoption of AR technology is slow and the bar to entry too high. Others say that what started as a revolutionary technology with amazing promise has been co-opted—and dramatically watered down—by tech players like Facebook.

Facebook CEO Mark Zuckerberg predicts augmented reality will dramatically change the types of content we produce. At the company’s annual F8 developers conference in San Jose, CA, Zuckerberg laid out his vision for incorporating AR into Facebook. He believes the future of augmented reality won’t involve headsets or televisions; instead, widespread adoption will be driven by smartphones and other mobile devices with cameras.

Not Everyone Agrees

“The problem is that the acceptable bar for what can, or should, be considered augmented reality is dropping quickly,” reports market researcher Bob O’Donnell in USA Today. O’Donnell (and others like him) believe the original promise of AR has been diluted and can’t be delivered through a smartphone camera.

The question of which technology company will be the big winner in the advancement of augmented reality to a wide audience is not clear yet. Some speculate that Apple (a late entry into the AR space) could easily find itself on top. “If Apple put augmented reality in the iPhone 8’s cameras,” writes Caitlin McGarry in Macworld, “the company would own the full stack: hardware powerful enough to put AR experiences in the palm of your hand without burning through your battery and the software support to entice developers into creating those experiences.”

A Big Market With A Big Opportunity

With support from industry heavyweights like Google, Apple, Microsoft, and Facebook, analysts predict the AR market in the US could reach $50 billion in annual sales by 2021 (see chart below), with global sales expected to reach $90 billion annually by the same time. With such a big revenue opportunity, it’s no wonder that nearly every tech brand is looking for a way to grab its share of this promising market.

The question for content-producing companies is not whether they should create AR content, but how they will manage the complex relationships between content assets in a world of layers. Add localization and translation to the mix, and the need for powerful, enterprise-wide content management becomes clear.


Artificial Intelligence and Work: Preparing for the Fourth Industrial Revolution

The Fourth Industrial Revolution

We’re at the beginning of what some analysts call The Fourth Industrial Revolution—sometimes referred to as Industry 4.0—the marriage of advanced manufacturing techniques with artificial intelligence (AI) and the Internet of Things (IoT). The goal is to produce a hyper-efficient, automated, interconnected system capable of communicating, analyzing, and using information to drive progress.

There’s a lot of talk about the impact of Industry 4.0 on jobs and the future of work. It’s a newsworthy topic that has made its way into the daily media cycle, particularly as some investors predict more automation and fewer jobs in the future.

What is AI?

According to the Google dictionary, “Artificial intelligence is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

AI is not a discrete technology. It’s a constellation of technologies that mimic behaviors and cognitive abilities associated with humans, such as rationalizing, reasoning, problem-solving and learning. Being able to sense, comprehend, and automatically act upon what is learned—without being explicitly programmed to do so—is what makes AI more powerful than traditional computing technologies.

Some of the most popular AI technologies are:

  • Speech Recognition — transforms human speech to text and other machine-readable formats
  • Natural Language Processing and Text Analytics — makes it possible for computers to understand sentence structure, intent, meaning, and sentiment
  • Machine Learning Systems — algorithms that learn and make predictions based on patterns in data
  • Decision Management — automated decision-making engines
  • Virtual Agents — chat bots and more advanced personal assistants like Amazon’s Alexa and Apple’s Siri
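Of these, machine learning is perhaps the easiest to demystify with a toy example. The sketch below—a minimal, entirely hypothetical nearest-neighbor classifier in Python—illustrates the core idea behind “learning from patterns in data”: the model simply memorizes labeled examples and predicts the label of whichever known example is closest to a new data point. (The data and function names are invented for illustration; real systems use far more sophisticated algorithms and far more data.)

```python
import math

def nearest_neighbor(train, query):
    """Predict a label for `query` by finding the closest training example.

    `train` is a list of (features, label) pairs; `query` is a feature tuple.
    """
    def distance(a, b):
        # Euclidean distance between two feature tuples
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # "Learning" here is simply remembering the data; prediction is a search
    # for the closest remembered example.
    _, label = min(train, key=lambda pair: distance(pair[0], query))
    return label

# Toy data: (hours of daylight, temperature in °C) -> season label
examples = [((14, 30), "summer"), ((9, 2), "winter"), ((12, 15), "spring")]
print(nearest_neighbor(examples, (10, 5)))   # closest to the winter example
```

Crude as it is, this captures the pattern shared by the technologies above: behavior is derived from data rather than hand-coded rules.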

AI is with you already. There’s no escape.

No, robots will not organize a digital insurrection and take over the planet—at least, not any time soon. But, AI will usher in dramatic and significant changes to the way we live, work, and play.

Today, no matter where you look, AI is likely to be making a debut appearance. While previously limited to the fictional world of motion pictures, AI is making its way into almost every product, service, and technology imaginable. In fact, if you’re like many people, AI has been part of your life for a few years now.

And, if you own a smartphone, chances are, you have a helpful personal assistant powered by AI at your beck and call. Perhaps the most widely-known personal assistant, Siri, is now available across much of the Apple product line. Siri’s power increases by connecting it to Apple HomeKit, allowing you to communicate with—and control—connected smarthome devices from afar with your voice.

The impact of AI

AI evangelists often tout the potential benefits AI may have on business, education, and humanity as a whole, but some experts worry that it may also introduce significant negative repercussions if allowed to develop unabated. Just imagine the arms race that might occur with the introduction of autonomous weapons.

Fear of mass destruction—or a hostile robot takeover—aside, AI is currently being employed to solve some of the world’s toughest challenges.

In 2016, Gartner ranked AI as its number one strategic technology (for the second year in a row). Google, IBM, Salesforce, Amazon, and Apple have invested significantly in the development, purchase, and acquisition of AI technologies. And, the AI race is on inside global brands. According to research from Narrative Science, 38% of enterprises today report using AI. By 2018, that number is expected to grow to 62%.

With Fortune 1000 companies embracing AI in a major way, it is no surprise that AI will continue to be a growing influence on the technology front impacting jobs, automation, and productivity—all with touchpoints to the political landscape.

Is a Document Management System a half measure?

While the direction of this blog is forward-looking, it is instructive at times to consider the history of technologies and techniques. One such technology is document management, along with its predecessor, electronic document imaging, both of which are precursors to modern content management. This is not to say that document management is dead as a technology or a solution; in fact, in some operational circles document management is very much alive and useful.

The earliest document management systems addressed the problem of paper proliferation. "Electronic document imaging systems" combined document scanning with database-driven storage, indexing, and retrieval to form libraries of what were once reams of paper files. "Document management" became a solution in its own right as vendors added support for digital file formats generated by word processors, spreadsheets, and other office-productivity products. The descendants of those earlier systems are the document-based enterprise content management systems of today, such as Microsoft SharePoint, OpenText Documentum, and Hyland OnBase.

When is DMS useful?
One question to consider: is a document management system (DMS) relevant in the modern world of digital content management? At its core, a DMS knows nothing about the information within a document; users don't link to content within a document managed in a DMS. Instead, users tag whole documents and link one whole document to another; the DMS simply maintains the inter-document links. In the context of digitized content, then, a DMS is something of a half measure because each document under management exists as a static element.
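To make the distinction concrete, here is a toy sketch (in Python, with invented names, not any real product's API) of what a DMS actually stores: opaque document blobs, document-level tags, and links between whole documents, with no visibility into the content itself.

```python
class DocumentStore:
    """A toy document management system: documents are opaque blobs.

    The store indexes tags and links at the whole-document level only;
    it never parses, searches, or links to content *inside* a document.
    """
    def __init__(self):
        self.docs = {}    # doc_id -> raw bytes (never inspected)
        self.tags = {}    # doc_id -> set of tags
        self.links = {}   # doc_id -> set of linked doc_ids

    def add(self, doc_id, blob, tags=()):
        self.docs[doc_id] = blob
        self.tags[doc_id] = set(tags)
        self.links.setdefault(doc_id, set())

    def link(self, src, dst):
        # One whole document points at another whole document.
        self.links[src].add(dst)

    def find_by_tag(self, tag):
        return [d for d, t in self.tags.items() if tag in t]

store = DocumentStore()
store.add("policy-2016", b"...scanned PDF bytes...", tags={"hr", "policy"})
store.add("memo-0042", b"...scanned PDF bytes...", tags={"hr"})
store.link("memo-0042", "policy-2016")
print(store.find_by_tag("policy"))  # ['policy-2016']
```

Because retrieval works only on document IDs and tags, any question about the text inside the blob is out of scope for the system, which is exactly what limits a DMS to managing static elements.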

"DMS is closer to DAM than to CMS."

This may be sufficient for some organizations and applications. If the document itself is significant – apart from or in addition to the data it contains – then a DMS can be a supremely useful form of content management. For instance, it's one thing to have a database containing the collected works of William Shakespeare, intricately tagged and linked via hypertext. It's an entirely different concern to digitize a specific document written in Shakespeare's own hand.

In function, a DMS is closer to a digital asset management (DAM) system than to a content management system, especially in its ability to protect and preserve the original form of a document. A DMS can also be a very low-cost solution, given the dozens of open-source document management systems available today. Enterprises looking to bring organization and clarity to large physical archives of documents can choose from a wide variety of free and fee-based DMS solutions. Using existing hardware and software – cloud storage, scanners, and simple image editing and management tools – an enterprise can digitize its documents without having to build or acquire a more complicated CMS.

The limits of DMS
However, by leaning on a DMS, enterprises may find themselves running up against the software's innate lack of sophistication. Since the tagged data refers only to the document as a whole, it is easy to miss valuable insights contained within the document. Documents cannot easily be interrelated with similar content, nor can their data be recombined into something new.

Enterprises have found value in linking digital asset management with content management, so it's likely that a DMS working in conjunction with a CMS is the ideal solution. If the physical document itself – or at least the visual representation of it – is of value, the ability to tag and separate the data within the document while still preserving its static form will lead to more agile, comprehensive information management.