Drew Robb, Author at Datamation

Data Warehouse: Key Tool for Big Data

Data warehouses are repositories where large amounts of data can be stored and accessed for reporting, business intelligence (BI), analytics, decision support systems (DSS), research, data mining, and other related activities. While they’re always associated with large amounts of data, data warehouses are not simply about massive storage capacity—rather, they’re about making disparate types of data from many different sources accessible to support decision-making. This article explains the concept of data warehouses, explores how they are constructed and used, and describes how they differ from other types of large-scale data storage.

What is a Data Warehouse?

A data warehouse is a storage architecture that supports the retention of and access to large amounts of data used for a variety of decision-making purposes. Data warehouses are optimized to retain and process large amounts of data fed into them via online transaction processing (OLTP)—a type of data processing that executes many concurrent transactions, as in online banking, shopping, or text messaging—and other high-volume systems. This data can then be used for reporting, search, and analysis.
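
The split in workloads described above can be made concrete with a few lines of SQL. The sketch below uses Python's built-in sqlite3 module and an invented orders table: the single-row insert stands in for the kind of transaction an OLTP system handles constantly, while the GROUP BY aggregate is the kind of query a warehouse is optimized to answer. It is a toy illustration, not tied to any particular warehouse product.

```python
import sqlite3

# Toy example: one in-memory database standing in for both an OLTP system
# and the warehouse table it feeds.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")

# OLTP-style work: many small, concurrent, single-row transactions.
cur.execute("INSERT INTO orders (region, amount) VALUES (?, ?)", ("EMEA", 42.50))
conn.commit()

# Warehouse-style work: a wide aggregate scan used for reporting and analysis.
cur.execute("SELECT region, SUM(amount), COUNT(*) FROM orders GROUP BY region")
print(cur.fetchall())
```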

Data warehouses are designed to ease the function of analytics by bringing together data from disparate sources into a central repository where rapid analysis can be carried out. Otherwise, data scientists and analysts have to extract the data they want to analyze from different sources and bring it into an application for analysis. 

A data warehouse can gather data from many different sources—including traditional relational databases, transactional systems, and large swaths of unstructured data from multiple sources—where it can be accessed by BI, analytics, and artificial intelligence (AI) applications for prediction, decision-making, and evaluation. 

How are Data Warehouses Constructed?

Data warehouses are optimized to deal with large volumes of data. While most are kept in the cloud, some are still kept on mainframe systems and enterprise-class servers. Data from OLTP applications and other sources is extracted for queries and used by analytical applications. 

Data warehouses can be designed to receive and process different types of data, with data volume, frequency, retention periods, and other factors determining the specifics of construction. Business goals and objectives lead the design, which is then focused on collecting, normalizing, and cleaning the relevant data. 

Perhaps the most vital aspect of design is the underlying storage infrastructure. Storage media must be capable of hosting a large quantity of data—if it’s cloud-based, the appropriate storage tier should be chosen to balance cost, capacity, and performance. Flash media offers the highest performance, for example, but at the highest cost. Hard disk drives (HDDs) offer better capacity for the cost, while hybrid flash/HDD solutions can boost performance without breaking the bank, making it possible for analytics systems to move needed data into flash for faster processing.

Some data warehouse architectures are designed primarily to cope with structured data from relational databases. But as most modern data warehouses collect and store data from both cloud and on-premises systems, they must be set up to cope well with both structured data and unstructured data like emails, text messages, and multimedia.

4 Popular Master Data Management Implementation Styles

Master data management is a technology-enabled discipline to ensure that all shared master data assets are uniform, accurate, and consistent. Master data refers to the identifiers and extended attributes that describe the business’s core entities, including customers, prospects, citizens, suppliers, sites, hierarchies, and accounts. In master data management, business and IT units work closely together to create a single master record for each person, place, or thing in the organization to achieve a single source of truth. This article looks at four of the most popular master data management implementation styles—Registry, Consolidation, Coexistence, and Centralized—and details how they work and when each of them is typically used.

Master Data Management Implementation Styles

While the goal of master data management (MDM) is clear enough, achieving it is easier said than done. Data comes from a wide range of sources and in many formats, which means there are always going to be data quality issues and other challenges involved in consolidating, verifying, and aligning an organization’s data.

As a result, there are many different approaches to master data management. Some involve heavy centralization while others take a more federated or distributed mindset. Organizations should select the one that best suits their existing situation, including information architecture, strategic direction, corporate structure, regulatory constraints, and business preferences.

In order of complexity, here are the four most popular master data management implementation styles.

Registry MDM Implementation

The Registry approach is the dominant one among organizations that deal with many disparate data sources, particularly smaller and mid-sized ones. It works by placing data from all of those sources into one central repository where the data can be cleaned, consolidated, and aligned. Matching algorithms are used to identify and remove duplicates.

An advantage of this approach is that the original data isn’t altered—changes are made directly within source systems as opposed to a separate MDM repository. Anyone verifying the truth of data, therefore, can use global identifiers to track it back to the original unaltered source.
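
As a rough illustration of the Registry pattern, the sketch below builds a cross-reference in Python: each source record keeps its own ID, and the registry assigns a global identifier by matching on a normalized key (here, an email address). The record fields, system names, and matching rule are all invented for the example; production MDM platforms use far more sophisticated matching algorithms.

```python
import uuid

# Invented example records from two source systems (CRM and ERP).
crm = [{"crm_id": "C-101", "name": "Acme Corp.", "email": "billing@acme.example"}]
erp = [{"erp_id": "E-550", "name": "ACME Corporation", "email": "Billing@Acme.example"}]

def match_key(record):
    # Naive matching rule: normalize the email address.
    return record["email"].strip().lower()

registry = {}  # match key -> {"global_id": ..., "sources": [...]}

for system, records, id_field in (("crm", crm, "crm_id"), ("erp", erp, "erp_id")):
    for rec in records:
        key = match_key(rec)
        entry = registry.setdefault(key, {"global_id": str(uuid.uuid4()), "sources": []})
        # The registry stores only pointers back to the unaltered source records.
        entry["sources"].append({"system": system, "id": rec[id_field]})

for key, entry in registry.items():
    print(entry["global_id"], "->", entry["sources"])
```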

For example, an organization might draw data from enterprise resource planning (ERP), customer relationship management (CRM), human resources (HR), accounting, and other systems. In the Registry approach, instead of each system drawing its own sometimes differing conclusions, the MDM plan provides an aligned opinion, prediction, or trend based on all of them. It is also a good way to satisfy compliance or regulatory requirements that demand data be preserved in its original form.

While this low-cost approach can be implemented rapidly without much impact on other key applications, there are some drawbacks, primarily reliability and time. Creating a registry hub that can receive, cleanse, and consolidate data from many different sources and types can be time-consuming. Modern platforms can automate these processes as well as the shunting of data and updates from one system to another, making them more feasible.

Consolidation MDM Implementation

The Consolidation style creates what is known as a “golden record” of all organizational data in a single place. Like the Registry style, it brings together data from multiple sources into a hub to develop a single version of truth, but in this approach, a human is involved in verifying the accuracy of the golden record and analyzing it for errors. This leads to increased reliability over the Registry approach. It also makes it possible to bring human experience to bear when evaluating the data, drawing conclusions, and making more informed decisions.

The golden record becomes the primary source of truth in the organization. As it is updated, any changes are pushed out to the original sources—ERP and CRM systems, for example. This is particularly beneficial for organizations that rely heavily on analytics functions, as cleansing, matching, de-duplication and integration functions can then be done in one place.

MDM Consolidation implementations are more expensive than Registry ones, but less expensive than the other types detailed here. The ability to synchronize data with original data sources almost in real time means that users of ERP, CRM, and other mission critical applications are not disadvantaged by long delays in receiving updates from the MDM hub. For these reasons, mid-sized organizations and those with a heavy analytics workload tend to favor this approach, using it to minimize the hassle involved in having multiple silos of information within the enterprise each presenting its own version of the truth.

Coexistence MDM Implementation

The Coexistence style of MDM implementation enables the MDM hub and the original data sources to all coexist fully in real time. Because there is no delay in updating records from one system to another, the golden record remains accurate at all times—as do the related applications that feed the data—leading to efficiency, timeliness, and complete accuracy.

This style is relatively simple for expanding businesses to upgrade to from the Consolidation style, as it takes only minor modifications to link centrally controlled data with their original sources. The benefits of doing so include ease and rapidity of reporting as well as enhanced data management overall.

As long as the central MDM hub and related data sources remain consistent, the golden record is highly unlikely to contain any disparities. Retaining all master data attributes in one place means overall data quality is enhanced, access is faster, and reporting becomes easier.

Centralized MDM Implementation

The Centralized style of MDM implementation is sometimes also known as the Transaction style. It’s a step up from the others detailed here: data management algorithms link, cleanse, match, and enrich the data, and all master data attributes are stored and maintained in one place. It’s all done centrally and then transmitted to the various sources that originally supplied the data.

In this approach, the centralized master system acts as the central repository. Surrounding systems and applications subscribe to it to receive updates so they remain consistent. This turns the MDM hub into a full-fledged system of record, which can then act as the primary source for the entire supply chain and customer base. Data creation among suppliers and customers can be done even in highly distributed environments, as the hub is now established as the system of origin for all information rather than being fed data first from other organizational applications.

The master data is always complete and accuracy is assured at all times, and it supports the implementation of advanced security and accessibility policies based upon data attributes—even in organizations with multiple locations, geographies, and domains. This architecture also increases data governance capabilities. As such, it is primarily used in large organizations with stringent data governance policies and deep enough pockets to afford the necessary investment of time and money.

Implementations can be lengthy and are often complicated, requiring a large implementation team, and help from external providers and consultants to execute. In most cases, organizations already have a consolidation or coexistence approach in place before they take the leap into the big leagues with a centralized MDM implementation.

Bottom Line: Choosing an MDM Implementation Style

When it comes to MDM, there’s no one-size-fits-all style that’s right for every business. But the need for the right MDM implementation style and the right data management approach has become increasingly evident following the rise of artificial intelligence and the associated massive capacity for growth.

“A strong foundation for AI necessitates well-organized data stores and workflows,” said Rich Gadomski, head of tape evangelism for Fujifilm Recording Media USA.

As a result, MDM tools are becoming more advanced, incorporating automated data quality, governance, compliance, and no-code/low-code configuration to meet evolving customer needs.

Generally speaking, as the complexity of the data environment increases and the size of the organization expands, the implementation style should become more sophisticated—that means organizations should be moving from Registry to Consolidation to Coexistence and, finally, to Centralized. In some cases, the types of data sources used by the enterprise might mean using two styles simultaneously to lessen complexity, at least at first.

Each business must determine its own specific requirements based on such factors as data quality, access needs, security and privacy necessities, the regulatory environment, existing technology platforms, data types involved, and data governance mandates. By setting the right vision, strategy and policies for data management upfront, the path ahead often becomes clear.

Read next: 7 Data Management Trends: The Future of Data Management

The 5 Stages of Data Lifecycle Management

Data lifecycle management addresses how to gain control of and capitalize upon the vast amounts of data most organizations possess. Enterprises that can break down their organizational silos and intelligently unify and analyze their data are more competitive and more successful than their peers. Accomplishing those goals requires careful organization of the five different phases that comprise the data lifecycle: creation, storage, usage, archiving, and destruction. This article details those stages and gives best practices for each.

What is the Data Lifecycle?

Broadly speaking, data lifecycle management is the discipline of ensuring that data is accessible and usable by those who need it from beginning to end. The data lifecycle itself covers all the stages an organization must pass through in its interaction with data, whether financial, customer-focused, or otherwise. Depending on who you ask, there are either five phases to the data lifecycle or eight. The five-stage cycle is the simpler and more common one:

Creation > Storage > Usage > Archiving > Destruction

The eight-stage cycle is an expansion of two stages of the five-stage cycle. In this model, “collection” and “processing” are part of the Storage phase, while “management,” “analysis,” “visualization,” and “interpretation” are part of the Usage and Archiving phases.

Generation > Collection > Processing > Storage >
Management > Analysis > Visualization > Interpretation

Successfully navigating each stage requires consideration for internal processes and users, infrastructure and technology, external regulators and legal authorities, consumer privacy, and more, making data lifecycle management a complex topic touching on many areas of an enterprise’s work. Let’s look at each stage in more detail. 

Stage One: Data Creation

Because enterprises take in a lot of data, it’s easy to take this stage for granted. But consider this: an organization’s data is created on a wide range of devices across many geographies. To do it right, this stage involves making sure users have the right tools to create data and the right processes in place to ensure that the data can be stored in the appropriate formats and types. 

Essentially, the Creation stage takes the initial data and ensures it can be captured and made available to the appropriate storage medium. To move to the next stage—the Storage phase—the data must be processed properly. Metadata should be added to make it searchable, for example, and access and privacy requirements identified and accounted for. This phase is best done automatically at the metadata layer as the data is fed into the storage media.

“Properly implemented, metadata acts as a roadmap to give organizations the insights needed to control all of their data and storage resources,” said Tony Cahill, senior solutions architect at StrongBox Data Solutions. “In hybrid and cloud environments, metadata can be used to improve data resilience, and reduce egress charges by targeting specific files.”  

Stage Two: Data Storage

The Storage stage is complex and carries many ramifications for the remainder of the lifecycle. If data is dumped carelessly onto the cloud or disk arrays, for example, it can easily get lost, be hard to manage, or become expensive to retain. There are many options for storage media—cloud, flash, disk, tape, or optical media, for example—but thought needs to be put into finding the right place to keep it, taking into account such factors as cost, accessibility, and the level of performance needed by the applications it serves. 

Security is also a concern in modern storage, which means that data immutability, security, privacy, and storage location must also be considered during this stage, as well as redundancy—to guard against disasters or data breaches, multiple backup copies of the data should be made. Additionally, external rules and regulations may dictate how data is stored. European nations, for example, don’t want data exported outside their borders, and impose harsh penalties for violators. Enterprises working in heavily regulated industries must ensure their data complies with all relevant regulations, including HIPAA, Payment Card Industry (PCI) standards, Sarbanes-Oxley, and any applicable Securities and Exchange Commission (SEC) rules.

Organizations should also be focused on internal requirements during this stage. Data stores should be organized so as to support business objectives and business continuity in the event of natural disasters, failures, or malware.

Learn more: What Is Data Sovereignty and Why Does It Matter?

Stage Three: Data Usage

How the data is stored in the Storage stage dramatically affects the Usage stage. Stored data needs to be made available to the users and applications that need it and restricted from those that don’t. Roles must be defined carefully and access rights assigned, but security, privacy, and performance must be balanced so that the burden on users is not so great that they can’t use the data or seek alternate “shadow systems” to avoid it.

The Usage stage also includes making data available for automated reports, dashboards, and analysis, which can also mean real-time data visualization. Analytics may be the most fundamental aspect of modern data usage, with a wide range of applications and artificial intelligence (AI) tools needing access to ever-larger data stores. Enterprise data must be managed so that both leadership and staff have access to the data they need, which requires detailed management of data at every step of the process.

Stage Four: Archiving

In the Archiving stage, thought must be given to the long-term storage of data. Because of the sheer volume of data in enterprise uses, it is no longer feasible to just retain everything in primary storage, whether that is flash or disk. With flash, prices climb alongside capacities, straining budgets. Even disk storage is expensive in large quantities, forcing businesses to seek a range of media to meet their budgets and needs.

An analysis by Horison Information Strategies highlights the fact that up to 80 percent of data is rarely or never accessed again after the first month or two—which means that mission-critical systems are almost never going to request any of that data. The best approach is to retain the 20 percent of data in active use on flash or disk and store the remainder in an immutable tape archive.

To alleviate concerns about how quickly that data could be made available if needed, active archive solutions can provide data from tape to analytics and AI within just a few minutes. Used in combination with tape, these tools can ensure data’s longevity and preserve access while preventing corruption and other retention challenges. 

“New erasure coding algorithms optimized specifically for cold storage will enhance data protection and durability for long-term retention while reducing storage costs significantly vs. multi-copy and cloud-based solutions,” said Tim Sherbak, Quantum’s enterprise products and solutions manager.

Stage Five: Data Destruction

No data should be destroyed before going through the Archiving stage, and a well-managed archive will include provisions to destroy data that has reached its end of life. But the rise of AI and analytics has also given rise to a philosophy demanding that data be retained indefinitely—because who knows when it might prove useful? 

The practicalities of such an approach present challenges. Storing data until the end of time would be an expensive proposition. One solution might be to summarize old data or submit it to analysis and classification before it is destroyed, providing a record of its key facets without burdening organizations with unwieldy data storage requirements.

Another point to consider is that the destruction of data can have serious implications. Improperly destroyed data can be a cybersecurity or privacy risk, and data destroyed prematurely can be a compliance violation. Data retained too long can also be a violation, and carries cost ramifications besides. This means that the Destruction stage sounds simple enough but, in practice, requires careful consideration. Enterprises will need to take their own internal needs into account and weigh them against external and legal requirements.

The Benefits of Data Lifecycle Management 

The key benefits of incorporating data lifecycle management into an enterprise are numerous, but generally fall into three areas.

Analysis

By bringing data out of silos and making it accessible to analytics and artificial intelligence systems, organizations glean a great many more insights than would otherwise be possible. This can have an impact on everything from reporting and real-time monitoring to customer engagement and competitive intelligence.

Governance

A system must be in place to look after data in accordance with the best interests of users, shareholders, and the organization as a whole. This ensures that data is processed and available wherever and whenever it is needed and plays a crucial role in compliance. 

Data Protection

The correct processes and data lifecycle technologies should provide sufficient cybersecurity and privacy safeguards and prevent data being lost due to errors such as lack of backup, corruption, or theft. 

Bottom Line: Managing the Data Lifecycle Stages

No matter how much thought and planning goes into data lifecycle management, errors will be made, and adjustments will be needed. Data lifecycle management is an ongoing process, not a one-and-done effort. To successfully manage data throughout its lifecycle, enterprises should listen to users—those who work with the data day in and day out. If they complain that valuable data is being ignored, that certain types of data need to be retained longer, or that privacy and security hurdles have hindered performance, adjustments may be needed. External sources—regulatory bodies and legal authorities, for example—also need to be taken into consideration. Constant monitoring and improvement of all data lifecycle processes can eventually produce a successful model for all concerned.

Read next: Data Management: Types and Challenges

 

What is a Data Lake?

A data lake is a centralized location where very large amounts of structured and unstructured data are stored. Its ability to scale is one of the primary differences between a data lake and a data warehouse. Data warehouses can be big, but data lakes are often used to centralize the information that was formerly retained within a great many data warehouses. Data lakes can simply function as large storage repositories, but their main use case is to serve as a place into which data is fed and where it can be analyzed to guide better decisions.

This article looks at data lakes in detail and explores their benefits and basic architecture.

What Is A Data Lake?

A data lake is not so much a physical object as it is a description of several things working together—essentially, it’s a conceptual way of thinking about a collection of storage instances of various data assets stored in a near-exact or exact copy of the source format. In short, they’re storage repositories that hold a vast amount of raw data in its native format until it is processed.

Unlike the more-structured data warehouse, which uses hierarchical data structures like folders, rows and columns, a data lake typically uses a flat file structure that preserves the original structure of the data as it was input. But this doesn’t mean the data lake makes the data warehouse obsolete. Each has its own place.

A data warehouse is essentially a massive relational database—its architecture is optimized for the analysis of relational data from business applications and transactional systems, like major financial systems. SQL queries are generally used to provide organizational data for use in reporting and analysis of key metrics. As such the data must be cleaned and sometimes enriched to act as the organization’s “single source of truth.”

A data lake, on the other hand, is more flexible. As well as relational data, it can store a great many forms of non-relational data. Depending on the organization, this might be social media feeds, user files, information from a variety of mobile apps, and metrics and data from Internet of Things (IoT) devices and sensors. Because data structure and schema are not so rigidly defined as in a data warehouse, the data lake can deal with all kinds of queries. Beyond SQL queries, it can also handle requests from big data analytics systems, full-text search, and machine learning (ML) and artificial intelligence (AI) systems.

To achieve this, each data element in a data lake is assigned a unique identifier and tagged with a set of extended metadata tags. When someone performs a business query based on certain metadata, all of the data tagged with it is then analyzed to answer the query or question.
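
A minimal sketch of that idea in Python: every stored object gets a unique identifier plus a set of metadata tags, and queries filter on the tags rather than on the raw payloads. The tag names and catalog structure are invented for illustration and do not reflect any vendor's catalog format.

```python
import uuid

catalog = []  # one entry per data element stored in the lake

def ingest(payload, **tags):
    """Register a raw object in the catalog with a unique ID and metadata tags."""
    entry = {"id": str(uuid.uuid4()), "tags": tags, "payload": payload}
    catalog.append(entry)
    return entry["id"]

ingest(b'{"user": 7, "clicked": "promo"}', source="web", kind="clickstream", year=2023)
ingest(b"\x89PNG...", source="mobile", kind="image", year=2023)
ingest(b"temp=21.4", source="iot", kind="sensor", year=2022)

def query(**wanted):
    """Return every element whose metadata matches all requested tags."""
    return [e for e in catalog if all(e["tags"].get(k) == v for k, v in wanted.items())]

for e in query(kind="clickstream", year=2023):
    print(e["id"], e["tags"])
```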

The rise of data lakes is being driven by the increasingly massive amounts of data enterprises are collecting and analyzing and the need for someplace to store it.

“The historical storage medium was a relational database, but these technologies just don’t work well for all these data fragments we’re collecting from all over the place,” said Avi Perez, CTO of BI and analytics software vendor Pyramid Analytics. “They’re too structured, too expensive, and they typically require an enormous amount of prior setup.”

Data lakes are more forgiving, more affordable, and can accommodate unstructured data. However, the flip side of the ability to store that much data is that they can become cluttered as everything is dumped inside them. Some call this the “data graveyard effect,” because the data becomes inaccessible and unusable—there’s too much of it, and there is a lack of differentiation to determine what data has real value in analysis.

A data lake must be scalable to meet the demands of rapidly expanding data storage.

Benefits Of Data Lakes

The data lake is a response to the challenge of massive data inflow. Internet data, sensor data, machine data, and IoT data all come in many forms and from many sources, and as fast as servers are these days, not everything can be processed in real time. Here are some of the main benefits of data lakes:

  1. Original data. The volume, variety, and velocity of incoming data make it easy to miss things the first time around. Storing data from multiple sources and in multiple formats in the data lake provides the option to go back later and look more closely.
  2. Easy analysis. Because the data is unstructured, you can apply any analytics or schema when you need to do your analysis (see the sketch after this list). With a data warehouse, the data is preprocessed—if you want to do a search or type of query that the data wasn’t prepared for, you might have to start all over again in terms of processing, if you can at all.
  3. Availability. The data is available to anyone in the organization. Something stored in a data warehouse might be only accessible to the business analysts.
  4. Business performance. According to research from Aberdeen Group, organizations implementing a data lake saw 9 percent higher revenue growth than their peers because they were able to detect trends more quickly and guide business decision-making with greater accuracy.
  5. Scalability. Data can be collected from multiple sources at scale without the need to define data structures, schema, or transformations.
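
The "easy analysis" point above is often described as schema-on-read: structure is applied when the data is queried, not when it is stored. The sketch below is a rough Python illustration using invented clickstream records; the analyst decides at query time which fields matter and how to handle records that are missing them.

```python
import json

# Raw events land in the lake exactly as they arrived, schema-free.
raw_events = [
    '{"user": 7, "page": "/home", "ms": 120}',
    '{"user": 9, "page": "/pricing"}',          # missing "ms" field
    '{"user": 7, "page": "/signup", "ms": 340}',
]

# Schema-on-read: the reader decides what the columns are at query time.
def read_with_schema(lines, fields, defaults=None):
    defaults = defaults or {}
    for line in lines:
        record = json.loads(line)
        yield {f: record.get(f, defaults.get(f)) for f in fields}

# One analysis cares about latency; another might only want pages visited.
for row in read_with_schema(raw_events, ["user", "ms"], defaults={"ms": 0}):
    print(row)
```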

Data Lake Architecture

Data lakes have a deep end and shallow end, according to Gartner—the deep end is for data scientists and engineers who know how to manipulate and massage the data, and the shallow end is for more general users doing less specific searches.

No special hardware is needed to build a data lake—its storage mechanism is a flat file system. You could use a mainframe and move the data to other servers for processing, but most data lakes are built upon the Hadoop Distributed File System (HDFS), a distributed, scale-out file system that supports faster processing of large data sets.

There needs to be some kind of structure, or order, and the data needs to have a timeliness quality—when users need immediate access, they can get it. It must also be flexible enough to give users their choice of tools to process and analyze the data. There must be some integrity and quality to the data, because the old adage about garbage-in, garbage-out applies here. Finally, it must be easily searchable.

Experts recommend multiple tiers that start with the source data, or the flat file repository. Other tiers include the ingestion tier, where data is input based on the query, the unified operations tier where it is processed, the insights tier where the answers are found, and the action tier, where decisions and actions are made on the findings.

Building A Data Lake

While data lakes are structurally more open than data warehouses, users are advised to build zones that separate data according to its cleanliness. To catalog everything in the lake, you have to group and organize it based on the cleanliness of the data and how mature that data might be.

Some data architects recommend four zones. The first is completely raw data, unfiltered and unexamined. Second is the ingestion zone, where early standardization around categories is done—does it fit into finance, security, or customer information, for example? Third is data that’s ready for exploration. Finally, the consumption layer—this is the closest match to a data warehouse, with a defined schema and clear attributes.

Between all of these zones is some kind of ingestion and transformation on the data. While this allows for a more freewheeling method of data processing, it can also get expensive if you have to reprocess the data every time you use it. Generally speaking, you will pay less if you define it up front because a lot has to do with how you organize the info in your data lake. There is a cost involved in repartitioning data.

Bottom Line: Data Lakes

The growing volume of data has created a demand for a better means of storing and accessing it. The simple database evolved into the data warehouse, and then the data lake. Tools for data lake preparation and processing generally take two forms: those released by traditional business intelligence (BI) and data warehousing vendors who have added data lakes to their product lines, and those from startups and open source projects, where much of the early data lake technology originated. Many of the larger companies, including Amazon, Microsoft, Google, Oracle, and IBM, offer data lake tools, and enterprises already well invested in technology from these providers will find a variety of tools for data ingestion, transformation, examination, and reporting. Some data lakes are available for on-premises deployment, while others are cloud-based services.

Now a mature technology, data lakes do more than just provide a repository for massive amounts of data—they also facilitate its analysis, meaning that organizations with data lakes are able to efficiently manage and analyze larger volumes of data to aid in decision-making.

Read next: Top 15 Data Warehouse Tools for 2023

4 Ways to Use Database Snapshots

Developing scalable enterprise applications on Oracle, SQL Server, MySQL, and other databases requires efficient ways to protect and store data. Developers need to make database replicas quickly for production and dev/test scenarios as well as ways to reduce data, manage it more easily, and increase storage efficiency. Storage snapshots meet many of these demands.

Instead of copying all the data in the database, a snapshot is a fast point-in-time copy of metadata that acts as a pointer to the underlying raw data map. Think of it like a table of contents that shows which files existed and the blocks where they were stored.
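
The table-of-contents analogy can be sketched in a few lines of Python. The toy model below copies only the block map when a snapshot is taken, and later writes go to new blocks (copy-on-write), so the snapshot keeps seeing the original data. This is a conceptual illustration only, not how any particular array or database engine implements snapshots.

```python
import copy

# Toy volume: a block store plus a "table of contents" mapping files to block IDs.
blocks = {1: b"jan orders", 2: b"feb orders"}
file_map = {"orders.db": [1, 2]}

def take_snapshot():
    # A snapshot copies only the metadata (the block map), not the data itself.
    return copy.deepcopy(file_map)

snap = take_snapshot()

# A later write goes to a new block (copy-on-write); the snapshot is untouched.
blocks[3] = b"feb orders (corrected)"
file_map["orders.db"] = [1, 3]

def read(fmap, name):
    return b" | ".join(blocks[b] for b in fmap[name])

print("live    :", read(file_map, "orders.db"))
print("snapshot:", read(snap, "orders.db"))   # still sees the original blocks
```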

Here are four ways to use snapshots to boost storage efficiency, streamline database management, and increase security and reliability.

Using Snapshots for Data Recovery

Standard backups of huge databases and storage repositories can take the technology equivalent of an eternity. Snapshots eliminate the wait when backing up databases, and work just as quickly to recover the data.

Anthony Nocentino, principal field solutions architect at Pure Storage, suggested that organizations with millions of files use snapshots to back up and recover their most vital primary data and rely on backups for everything else, giving them fast access to the most urgently needed files. Nocentino stressed following the longstanding best practice for backups known as the 3-2-1 system, in which organizations retain three copies of their data on at least two different types of media—for example, one copy on disk and another in the cloud or on tape—with at least one copy kept offsite.

Learn more about database trends.

Using Snapshots for Data Security

With the surge of ransomware attacks, companies need all the help they can get when it comes to shoring up cyberdefenses. In addition to using endpoint protection and other safeguards, enterprises can augment their defenses by using snapshots to protect their storage systems.

When used with intelligent file indexing, snapshots make files easily referenceable for version tracking and recoverability. Gartner analyst Jerry Rozeman recommended immutable snapshots—data snapshots that cannot be changed or altered in any way—as a way to aid recovery and provide additional protection from ransomware.

While snapshots are naturally immutable, attackers can delete them. Vendors are adding immutability features that provide protection against such efforts by making it impossible to delete data snapshots within an administrator-specified time frame. For example, snapshots can be scheduled to copy data every two hours and retain the contents for two weeks. During those two weeks, snapshots cannot be deleted.
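
That schedule-and-retain behavior amounts to a simple policy check: a delete request is refused until the snapshot is older than the retention window. The sketch below expresses the idea generically in Python; the function names and the two-hour/two-week values are illustrative and do not represent any vendor's immutability interface.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(weeks=2)   # administrator-specified retention window
SCHEDULE = timedelta(hours=2)    # how often snapshots are taken

snapshots = []  # list of (created_at, snapshot_id)

def create_snapshot(now, snap_id):
    snapshots.append((now, snap_id))

def delete_snapshot(now, snap_id):
    for created_at, sid in snapshots:
        if sid == snap_id and now - created_at < RETENTION:
            raise PermissionError(f"{snap_id} is immutable until {created_at + RETENTION}")
    snapshots[:] = [(c, s) for c, s in snapshots if s != snap_id]

t0 = datetime(2023, 7, 1)
create_snapshot(t0, "snap-0001")
try:
    delete_snapshot(t0 + timedelta(days=3), "snap-0001")    # refused: inside the window
except PermissionError as err:
    print(err)
delete_snapshot(t0 + RETENTION + SCHEDULE, "snap-0001")     # allowed after two weeks
```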

Learn about vulnerability scanners for cybersecurity.

Using Snapshots for Data Migration

The cloud’s dominance has created a growing need to migrate lots of data—from on-premises solutions to the cloud, from one application or platform to another, and from cloud to cloud. Occasionally the flow must be reversed, such as for data recovery or when a company decides to repatriate cloud data on-premises, and vast amounts of data must be moved around.

The traditional way to do this is using backups. But this approach ties up central processing unit (CPU), memory, and networking resources, which makes scheduling migrations difficult. If done during production hours, users might experience sluggish or unavailable systems, and done after hours, migrations can last for days or even weeks.

Snapshots provide a smoother alternative. By taking a snapshot of data before migrating to a new platform, companies have a reliable fallback if something goes wrong or data is corrupted. Everything can be rolled back to before the start of the data move. Snapshots enable organizations to seamlessly move their data externally, to the cloud or between cloud vendors, as well, shortening downtimes and increasing reliability.

Learn about the difference between data migration and ETL.

Using Snapshots for DevOps and Testing

Development and testing are all about creating things rapidly, testing them, and then moving on by rolling the environment back to its original state. Snapshots offer the velocity and performance required for this fast-paced workflow.

Database or application development is sometimes done on-premises and sometimes in the cloud, and data may need to be moved from one location to another. Snapshots provide developers with the tools they need to not get bogged down in data movement and environment resets.

Learn more about the top DevOps tools.

Bottom Line: Database Storage Snapshots

With data growing at a rate of 20-30 percent annually in many organizations, storage managers and database managers can easily become bogged down by the demands of data storage, movement, migration, and archiving. Multiple approaches must be used. Snapshots offer many advantages, but they must be used in conjunction with other tools such as backup and replication. They can improve efficiency, increase security and reliability, and streamline migrations when used as part of an overall storage strategy.

Read next: Top 6 Database Challenges and Solutions

Top 5 Current Database Trends

With more data created in the last couple years than in the rest of human history combined, the need to manage, manipulate, and secure it has never been more critical. Databases have evolved to keep pace with the growing need, changing to accommodate new ways of gathering and using information or becoming outdated and going the way of the floppy disk. Their future looks even more turbulent as new technologies and ways of interacting with data come into play.

This article outlines five current database trends that explain the booming market for them and offer some idea about what to expect as they continue to evolve with changing technology.

1. Old Guard Losing Out to Cloud DBs

Not so long ago, Oracle, IBM, SAP, Teradata, and Software AG were the bigwigs of the database world. They all began life as on-premises systems and all have attempted to transition to the cloud, with varying degrees of success. However, cloud-based databases have largely taken over and cloud-native databases dominate the market. Microsoft is now the leader, with Amazon Web Services (AWS), Google Cloud Platform (GCP), and Alibaba Cloud close behind. Oracle, IBM, and SAP retain a large slice of the market after a painful transition to cloud-based systems, but cloud is king without question.

Learn more about cloud vs. on-premises architecture.

2. Artificial Intelligence in Databases

On average, database administrators (DBAs) spend 90 percent of their time on maintenance tasks, according to Oracle’s Cloud Business Group surveys. AI is being added to database management as a way to greatly lower the maintenance burden. When well-integrated with databases and their underlying infrastructure, AI helps DBAs spot storage and memory bottlenecks and other issues that inhibit database operations.

3. In-Memory Databases

Today’s mission-critical software solutions require minimal database latency for optimal performance. Unfortunately, traditional database management systems (DBMS) rely on sluggish read/write operations for storing data on media such as hard disk drives. For this reason, in-memory databases—databases that store entire datasets in random access memory (RAM)—have become strong alternatives for these critical use cases. Records stored and retrieved directly to and from RAM make faster, more reliable performance possible. Additionally, popular solutions such as Redis—an in-memory data structure store—make it possible for databases to support more data structure types and custom access patterns, allowing for the simplification of software code without data structure conversion or serialization.
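
As a concrete example, the snippet below uses the redis-py client to store and read records directly from an in-memory data structure store. It assumes a Redis server is reachable on localhost's default port and a reasonably recent redis-py release; the key names are invented.

```python
import redis

# Assumes a Redis server running locally on the default port 6379.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Simple key/value records live entirely in RAM, so reads and writes avoid disk I/O.
r.set("session:42:user", "drobb")
print(r.get("session:42:user"))

# Redis also exposes richer in-memory structures, such as lists and hashes,
# so application code needs no serialization layer of its own.
r.rpush("recent:logins", "drobb", "jsmith")
r.hset("user:drobb", mapping={"plan": "enterprise", "region": "us-east"})
print(r.lrange("recent:logins", 0, -1))
print(r.hgetall("user:drobb"))
```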

4. All-Flash Databases

Memory-based databases are great, but can be very expensive. All-flash arrays provide similar performance at a better price, while also providing a lot more capacity. As a result, more databases now run inside all-flash arrays than on in-memory systems. An example of this is JP Morgan Chase, which was seeing a 30 percent increase or more in data storage needs annually. Greg Johnson, executive director of Global Electronic Trading Services, transitioned from disk-based systems to all-flash arrays to provide the capacity and speed his databases need for transactional and other mission-critical systems. “The combination of all-flash and AI has helped us to approve over 200 million credit card transactions that would have otherwise been declined,” Johnson said.

5. Stronger Database Security Layers

With cyber attacks and data breaches continuing to dominate headlines in the technology world, more focus has been placed on securing the data layer of the software application. In turn, more vendors are augmenting their offerings with stronger built-in security features. Oracle now integrates always-on encryption and automated patching at the database level, for example, while Amazon RDS includes a built-in firewall for rules-based database access. Similarly, database users need far more safeguards related to privacy, data residency, sovereignty, and localization, and DBAs must pay attention to where data is stored and where it is going. Vendors are now introducing location-tracking features into their storage arrays and databases to make it possible to verify compliance.

Learn more about big data security.

Bottom Line: Database Trends

Most databases fall into one of two categories: relational database management systems (RDBMS) and unstructured/special application databases. RDBMS have been around since the 1970s and consist of related tables made up of rows and columns. They’re manipulated using structured query language (SQL), the de facto standard language for performing create, read, update, and delete (CRUD) functions. This is the dominant database type for enterprise computing.
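
To make the CRUD acronym concrete, here are the four operations as plain SQL statements, run through Python's built-in sqlite3 module against a throwaway in-memory table. The table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, tier TEXT)")

# Create
cur.execute("INSERT INTO customers (name, tier) VALUES (?, ?)", ("Acme Corp.", "gold"))
# Read
cur.execute("SELECT id, name, tier FROM customers WHERE tier = ?", ("gold",))
print(cur.fetchall())
# Update
cur.execute("UPDATE customers SET tier = ? WHERE name = ?", ("platinum", "Acme Corp."))
# Delete
cur.execute("DELETE FROM customers WHERE name = ?", ("Acme Corp.",))
conn.commit()
```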

The advent of the cloud saw data processing capabilities scale horizontally like never before. This happened just in time to support the increase in data generated by the internet—both structured and unstructured. But as unstructured data became increasingly common, the need for a new database paradigm led to the creation of NoSQL, a broad category of databases that do not use SQL as their main language. Because NoSQL databases have no set requirements in terms of schemas or structure, they are ideal for software environments based on DevOps toolsets and continuous integration/continuous delivery (CI/CD) pipelines.

Technologies come and go, and databases are no different. Early DBAs cut their teeth on Informix, SQL Server, and Oracle database management systems, while the next generation favored the simplicity of the open-source MySQL/LAMP stack and PostgreSQL. Current DevOps workflows benefit from the unstructured agility of NoSQL databases like MongoDB and DynamoDB.

Where databases go from here will depend upon a number of factors, including technology and market innovations, but the need for them will only continue to increase.

Read next: Top 6 Database Challenges and Solutions

Top 6 Database Challenges and Solutions

Database administrators and data architects can encounter a number of challenges when administering systems with different requirements and behavioral patterns. At the June 2023 Pure//Accelerate conference, Pure Storage’s Principal Solutions Manager Andrew Sillifant laid out six of the most common database challenges and his solutions for them.

1. Managing Scale within Cost Constraints

According to Statista, the volume of data and information created is increasing by about 19 percent each year, while others report storage growth figures far in excess of that amount.

“We are seeing data grow at 30 percent or more annually,” said Greg Johnson, Executive Director of Global Electronic Trading Services at JP Morgan Chase. “We hit the wall and were unable to keep up with traditional storage.”

To gain the speed and efficiency necessary to keep up with data expansion, the company switched to all-flash storage arrays that can scale out or scale up as demand requires.

“Power critical applications with latencies as low as 150 microseconds are best served by flash storage,” Sillifant said. “Always-on deduplication and compression features can enable more databases to run on fewer platforms.”

Sillifant said the Pure Storage FlashArray and FlashBlade arrays provide such benefits. Some are built for top performance, while others have been engineered to let storage managers fit more capacity into a smaller space while still providing good performance; scale-out file and object platforms are best for demanding high-bandwidth, high-capacity use cases.

2. Maintaining Consistent Performance

Oracle’s Cloud Business Group data reveals that database administrators (DBA) spend an average of 90 percent of their time on maintenance tasks. The best solution to reducing the maintenance burden is to improve reporting and support it with analytics and artificial intelligence (AI) so it is easier to discover storage or other bottlenecks inhibiting database operations.

3. Data Protection and Availability

Data protection, disaster recovery, and maintaining high availability for databases are persistent issues DBAs are facing. According to the Uptime Institute, 80 percent of data center managers and operators have experienced some type of outage in the past three years.

To boost data protection and disaster recovery, Sillifant recommended volume and filesystem snapshots that can serve as point-in-time images of database contents. For immutability, Pure Storage SafeMode snapshots give additional policy-driven protection to ensure that storage objects cannot be deleted. Another safeguard is continuous replication of volumes across longer distances and symmetric active/active bidirectional replication to achieve high availability.

4. Management of Data Pipelines

As data sources grow, so do the processes that support them. DBAs wrestle with complexity that makes management a chore. DBAs and storage managers need as many metrics as possible to be able to cut through this complexity and efficiently manage their data pipelines.

Some of these are provided by vendors such as Splunk and Oracle. Others are included within storage arrays. Pure, for example, has OpenMetrics exporters for its FlashArray and FlashBlade systems that allow IT staff to build common dashboards for multiple personas using off-the-shelf tools like Prometheus and Grafana.
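
As a rough illustration of the exporter pattern (not Pure's actual OpenMetrics exporter), the snippet below uses the prometheus_client library to publish a couple of invented storage metrics over HTTP, where a Prometheus server could scrape them and a Grafana dashboard could chart them alongside database metrics.

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# Invented metric names for illustration; a real exporter would read these
# values from the array's management API rather than generating them.
capacity_used = Gauge("array_capacity_used_bytes", "Bytes consumed on the array")
read_latency = Gauge("array_read_latency_ms", "Average read latency in milliseconds")

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        capacity_used.set(random.randint(10**12, 2 * 10**12))
        read_latency.set(random.uniform(0.2, 1.5))
        time.sleep(15)
```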

With containers growing so much in popularity, DBAs and storage managers also need tools to measure and manage their containerized assets and databases.

“If database queries are running slowly, for example, database personnel typically have no idea what is happening in the storage layer and vice versa,” said Sillifant. “There has traditionally been a lack of insight into each other’s worlds.”

He suggested Portworx Kubernetes storage to address the problems inherent in monitoring data within containers and being able to share information and resolve issues. Metrics can be gathered from a number of layers (including the storage volume layer) and collated into a single reporting dashboard for data within containers.

“You can build common dashboards for databases and storage to correlate behavior and determine where problems lie,” said Sillifant. “Every time you solve such problems rapidly, you make the data more valuable to the business.”

5. Data Control

Organizations handling international data or operating within specific geographies such as the European Union, California, or New Zealand must ensure that data is not placed at risk by being shared across borders. Data residency, sovereignty, and localization have become more important than ever, and each comes under the heading of data control. Whether data is in the cloud or on-premises, DBAs must pay attention to where it is stored and where it is going.

The solution in this case is granular and accurate location tracking of all data as to where it is being stored and in what volumes. Those dealing with reporting and audits can then verify easily that data privacy policies are being observed and data isn’t straying from where it is supposed to reside.

6. Data Migration

According to estimates, it can take anywhere from six to 24 months to set up and configure complex server architectures and cloud-native services when huge amounts of storage are involved. Migrating data from one database or server or cloud to another often eats up much of this time. When a large volume of data is involved, get ready for long migration delays.

Many of the features noted here help ease the data migration burden. Asynchronous replication and snapshots simplify the process of moving data from on-premises to the cloud and back. Snapshots eliminate the hours or even days needed to transfer the data from large databases and storage volumes to another location. Sillifant recommended Portworx for end-to-end data management of containers, which includes the ability to move their data from anywhere to anywhere.

Modern Databases Need Modern Platforms

Modern databases are generally larger and more complex than ever. They must be able to exist in or interface with on-premises, multi-cloud, and hybrid environments. To do so efficiently and well, they must be supported by storage platforms and tools that offer the speed, agility and flexibility needed to keep up with the pace of modern business.

Top Knowledge Management Systems for 2023

Knowledge management (KM) systems are used to identify, organize, store, and disseminate information within an organization. Because they gather organizational know-how, skills, and technology and make them easily accessible from a centralized place—both within and outside an organization—knowledge management systems have broad utility for many aspects of work.

One area in which they are especially useful is customer service, where they can improve the accuracy and efficiency of call center and help desk personnel, facilitate customer self-service, and speed up everything from employee training to problem-solving and information recovery.

Organizations looking to implement knowledge management for customer service or other uses have a number of options from which to choose. While budget will play a part in any software selection decision, it’s just one of many factors to consider, and this guide ranks the best knowledge management systems by use case to help you see how they compare to your own particular needs.

  • Best for Collaboration: Confluence
  • Best for Multi-Channel: ZenDesk for Service
  • Best for SMBs: Zoho
  • Best for Self Service: Jira
  • Best for Sales and CRM Integration: Salesforce
  • Best for Agent Assistance: KMS Lighthouse
  • Best for Customer Engagement: Verint

Top Knowledge Management Software at a Glance

Knowledge management software is very much in demand, with Gartner reporting that 74 percent of customer service and support leaders have set a priority of improving knowledge and content delivery to customers and employees. The recent boom in artificial intelligence (AI) is affecting this market, like so many others, with systems that incorporate AI features and chatbots becoming increasingly popular.

Each of the top systems takes a slightly different approach to knowledge management, offering a mix of features and benefits. Here’s a quick look at how they compare.

| Product | Cloud-based | Multi-Channel | AI Chat | Help Desk | Pricing (per user, per month) |
|---|---|---|---|---|---|
| Confluence | Yes | Yes | No | No | $5-$10 |
| ZenDesk | Yes | Yes | Yes | Yes | Starts at $49 |
| Zoho | Yes | Yes | Yes | Yes | $12 to $25 |
| Jira | Yes | Yes | Yes | Yes | $47 |
| Salesforce | Yes | Yes | Yes | No | $25 to $300 |
| KMS Lighthouse | Yes | No | Yes | No | From $25 |
| Verint | Yes | Yes | Yes | No | Not available |

 


Confluence

Best for Collaboration

Atlassian’s Confluence is all about content collaboration across Android, iOS, Linux, and Windows devices. This cloud-based system enables companies to publish, organize, and access knowledge from a single place, and is especially well-suited to helping organizations collaborate on knowledgebase data across multiple channels.

Features

  • Works across multiple channels on Android, iOS, Linux, and Windows devices
  • Cloud-based
  • Lets users create documents, publish, organize, and access knowledge from a single place
  • Collaboration features include feedback on new documents, keeping track of versions, sharing documents, exporting PDFs, and copy/pasting images
  • Includes project management and Jira integration

Pros

  • Can collaborate with Asana, Slack, Miro Board, Google Sheets, and other tools
  • Good ease of use
  • Enterprise-grade permission handling

Cons

  • Lack of a flowchart builder
  • Dated user interface
  • Lack of Microsoft Teams integration

Pricing

Confluence costs $5.75 per user, per month for the standard version and $11 for premium. The price goes down by almost half after 1,000 licenses. A free “lite” version for up to 10 users lacks enterprise features and includes just 2 GB of storage.


ZenDesk for Service

Best for Multi-Channel

Zendesk for Service provides an open, flexible platform designed to enable customer self-service. It helps organizations provide personalized documentation across any channel, can scale to the large-enterprise size, and has an integrated Help Desk ticketing system.

Features

  • Users can interact via phone, email, chat, and social media
  • Easy to implement, use, and scale
  • Integrated ticketing system
  • Includes AI and automation for faster issue-resolution
  • Facilitates customer self-service

 Pros

  • Offers a unified workspace with a contextual interface
  • Omnichannel support

 Cons

  • Can be too complex to use for SMBs
  • Expensive
  • Can be difficult to integrate, especially for small businesses

Pricing

ZenDesk starts at $49 per user, per month. For the self-service customer portal, AI, customizable tickets, and multilingual support, the price rises to $79. The professional version at $99 also includes a live agent activity dashboard, integrated community forums, private conversation threads, and more.


Zoho

Best for SMBs

Zoho Desk can manage all customer support activities and is context aware. It has integrated Voice over IP (VoIP) features and comes with analytics and AI tools as well as a ticketing system, making it a good choice for SMBs and mid-sized enterprises.

Features

  • iOS and Android compatible
  • Provides features for interacting with agents through VoIP and social media
  • Agent, manager, and customer-specific features
  • Includes a ticketing system
  • Strong reporting capabilities
  • Tracks customer requests across channels

Pros

  • Cloud-based system is easy to use and makes ticket-tracking easy
  • Users can manage tickets and everything else in one place
  • Includes AI-based chat and analytics

Cons

  • Not designed for large enterprises
  • Some customization and integration limitations

Pricing

Zoho is free for up to three users. The Professional plan costs $12 per user, per month, and the Enterprise plan costs $25 per user, per month.


Jira

Best for Self Service

Jira Service Management is a tool for self-service knowledge management for employees and customers. It helps trace knowledge usage frequency and can identify content gaps and flawed articles. AI-powered search is available as well as good editing and formatting capabilities.

Features

  • Tracks document changes, incident runbooks, and playbooks so teams can continuously learn and improve
  • Helps monitor knowledge usage to identify content gaps, optimize articles, and see which articles deflect the most requests
  • Provides a federated knowledge base

Pros

  • Self-service management of knowledge articles
  • Provides companies and employees with relevant articles quickly
  • AI-powered search that surfaces relevant knowledge articles

Cons

  • Knowledge management is one facet of a much larger suite, which may not suit organizations that only need knowledge management

Pricing

Jira Service Management is free for up to three users. Its premium plan starts at $47 per user, per month. A custom enterprise plan is also available.

Salesforce Service Cloud

Best for Sales and CRM Integration

Salesforce Service Cloud is part of the vast Salesforce universe. Its aim is to help customers find answers quickly across any channel, which it accomplishes by empowering agents with the best answers to questions. This multichannel solution also incorporates AI.

Features

  • Centralized knowledgebase for all agent and customer information
  • Uses analytics to identify which knowledge articles are working and to identify new articles that need to be created
  • Automatically suggests articles for conversations
  • Can share across multiple channels
  • Can embed knowledge articles into a website, portal, community, and mobile app

Pros

  • Can quickly deliver answers to customers by adding the knowledgebase to the Salesforce agent workspace
  • Integrates fully with Salesforce customer relationship management (CRM)
  • Uses AI chatbots to recommend articles
  • Integrated computer telephony capabilities

Cons

  • May be too much for companies that just want knowledge management, as it also includes case management, a service console, service contracts, computer telephony integration, web services, and more

Pricing

Salesforce Service Cloud only provides knowledge management in the Starter ($25 per user, per month) and Unlimited ($300 per user, per month) editions.

KMS Lighthouse

Best for Agent Assistance

KMS Lighthouse is all about knowledge management, seeking to improve first-interaction resolution and reduce call center operating costs by intelligently directing agents to the right answer.

Features

  • Built-in intelligence can cut in half the time needed to train and onboard agents and employees
  • Lighthouse call center knowledgebase serves as a “single point of truth” to help call center agents speed up calls and avoid inaccuracies
  • Lighthouse Chat enables agents to communicate and collaborate with knowledge-sharing via instant messaging and links to articles and relevant content

Pros

  • AI provides instant responses to agents and customers during search
  • Can function like a personal assistant to answer on-the-job questions
  • Makes all product/service knowledge easy to tap into and compare to help with upselling and cross-selling

Cons

  • Integration can be a challenge
  • Needs better reporting

Pricing

KMS Lighthouse starts at $25 per user, per month.

Verint

Best for Customer Engagement

Verint Knowledge Management integrates across business operations with self-service contact center capabilities designed to help staff engage better with customers. Automated knowledge is embedded directly in tools and workflows.

Features

  • Uses context from customer history to personalize results so the right knowledge appears with little to no searching
  • Helps agents find answers via search using everyday language

Pros

  • Guided decision trees help resolve complex issues
  • Helps agents understand what customers are looking for
  • New content is automatically analyzed and optimized for search, removing the burden of manual tagging and linking

Cons

  • Vendor is not transparent about pricing models
  • Customer reviews say it is expensive

Pricing

Verint does not publicize its pricing models.

Key Features of Knowledge Management Software

While each platform takes a slightly different approach to knowledge management, all of the systems in this article share some common features.

Cloud-based

Knowledge management repositories should include all of the business’s articles and sources of knowledge, but locking them all on-premises can be limiting. Cloud-based systems integrate with other systems more easily and can better facilitate search and sharing among users and customers.

Multichannel

Knowledge management software should make it easy to collaborate across multiple channels, such as phone, email, chat, and social media. Information should always be accessible, anywhere, on any channel, whether on tablets, mobile devices, PCs, or laptops.

AI Chat

AI is being incorporated into a great many tools and IT systems, and knowledge management is no exception. Its best use case is in chatbots that answer questions from users and agents, summarize information, and surface sales data.

Help Desk

Knowledge management systems can be tightly integrated with help desk and customer contact center systems. Not all organizations need this functionality, which makes it a useful selection point for narrowing down choices.

Price

Generally speaking, the more features and capabilities a knowledge management package includes, the higher the cost. Lower-cost systems may suffice for organizations that need limited features. Those that need enterprise capabilities, help desk integration, and advanced AI should expect to pay more.

Knowledge Management System Benefits

A knowledge management system can benefit a business in a number of ways. Here are a few of the most common:

  • Provides all enterprise knowledge in one place
  • Offers powerful search capabilities to find information quickly
  • Helps customer service agents answer customer questions
  • Lets customers access knowledgebase for self-service
  • Makes it easy and fast to update information
  • Improves both accuracy and consistency
  • Helps with training new employees

Methodology

The items on this list were chosen based on analyst evaluations, user reviews, and assessment of a wide range of lists suggested by knowledge management experts.

Bottom Line: Top Knowledge Management Systems

While knowledge management systems have broad utility for many aspects of an organization’s work, they can be especially useful in reducing the costs of customer service, facilitating self-service, and speeding up everything from employee training to problem-solving and information recovery. Organizations should select knowledge management software based strictly on their specific business needs. Some need all the bells and whistles that come with enterprise-class systems, such as scalability, help desk integrations, and more, while others only need core knowledge management functionality. Choose the system that best meets your needs without paying for unnecessary features.

How to Use a Knowledge Management System to Improve Customer Service https://www.datamation.com/trends/use-knowledge-management-to-improve-customer-service/ Tue, 30 May 2023 18:24:52 +0000 https://www.datamation.com/?p=24212 A knowledge management (KM) system could be defined as any system that identifies, organizes, stores, and disseminates information within an organization to make it easily accessible and usable. Whether a single, purpose-designed tool or a collection of integrated systems, a knowledge management system can provide value to an organization in a wide variety of ways.

One common business use is to improve customer service. In this context, a knowledge management system makes it easy to provide relevant and personalized information to customers and the staff who support them. This article looks at specific ways a business can use knowledge management systems to improve their customer service.

Eliminate Silos by Sharing Knowledge

A knowledge management system can help a business break down information silos that prevent different parts of the organization from having access to relevant information or being able to see more holistic views of customers and their interactions.

For example, information in the customer database may not be available to the analytics system, or management may collect sales data that is never shared with the front-line workers who spend their days contacting customers.

A knowledge management system implemented in a call center or customer service setting can eliminate these information silos using the following best practices:

  • Consolidate knowledge repositories. Implementing systems that make it possible to unify knowledge repositories and databases will help keep all relevant information in a single system accessible by all.
  • Adopt federated search. Consolidating data and providing federated search tools make it possible for front-line staff to search all data sources with a single query (see the sketch after this list).
  • Design systems from the point of service backwards. A customer-first approach will help ensure all customer data is available at each stage of their interaction with the company.
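
To make the idea of federated search concrete, here is a minimal Python sketch that fans a single query out to several knowledge repositories and merges the hits into one ranked list. The repository names, fields, and term-overlap scoring are illustrative assumptions rather than any specific vendor's API.

    # Minimal federated search sketch: one query fans out to several
    # knowledge repositories and the results are merged into a single list.
    # Repository contents and the scoring scheme are placeholders.
    from dataclasses import dataclass

    @dataclass
    class Result:
        source: str
        title: str
        score: int  # number of query terms matched

    class Repository:
        def __init__(self, name, documents):
            self.name = name
            self.documents = documents  # list of (title, text) pairs

        def search(self, query):
            terms = query.lower().split()
            hits = []
            for title, text in self.documents:
                score = sum(term in text.lower() for term in terms)
                if score:
                    hits.append(Result(self.name, title, score))
            return hits

    def federated_search(query, repositories):
        """Run one query against every repository and merge the results."""
        hits = []
        for repo in repositories:
            hits.extend(repo.search(query))
        return sorted(hits, key=lambda r: r.score, reverse=True)

    repos = [
        Repository("CRM", [("Acme Corp account", "billing contact and renewal dates")]),
        Repository("Helpdesk", [("Reset a password", "how to reset a customer password")]),
    ]
    for hit in federated_search("password reset", repos):
        print(hit.source, hit.title, hit.score)

A production implementation would add connectors to real systems, authentication, and proper relevance ranking, but the fan-out-and-merge step stays the same.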

The easier it is for staff to find customer information, the easier it will be for them to provide high quality call responses and overall customer service.

Provide Consistent Information Across Channels

Call centers can no longer rely on a phone line for customer service. In this multi-channel world, customers looking for support expect online knowledge bases, social media access, chat tools, and more. This can pose challenges for organizations looking to provide consistent information that is optimized for viewing across all channels.

Businesses looking to implement knowledge management across multiple channels should:

  • Deliver consistent multi-channel data. Users don’t want to have to repeat themselves by reentering data or explaining their issue multiple times at each stage of their interaction with customer service.
  • Optimize content so it is viewable on any channel. Information might look different on a smartphone than in a web browser, and graphics-intensive sites can provide a poor user experience for low-bandwidth customers.
  • Integrate all channels. Customer service agents should be able to move easily among the different channels to provide a seamless, unified customer response.

Some people prefer to call, some want to email, others would rather chat or post on social media. A knowledge management system can make it easier to accommodate all customers, regardless of their preference.

Improve Customer Service Responses

Customer service often depends upon a rapid, user-friendly response. Knowledge management systems can facilitate this by making data available rapidly, on a single screen if possible, with drill-down features that make further information available when necessary.

Businesses looking to speed up customer response with knowledge management should:

  • Design systems to answer queries fast. Impatient customers won’t be forgiving of underpowered hardware or glitchy software.
  • Provide a single dashboard or screen. Identify the key information needed to serve customers quickly and summarize it on a single, easy-to-read dashboard for customer service representatives (see the sketch after this list).
  • Include comprehensive drill-down features. When a representative needs more information about a customer or transaction, they should be able to get to it from the main screen without going into another system or location.
  • Prevent unnecessary delays. Any additional steps or unnecessary information can result in customer frustration, dropped calls, and customer churn.
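
As a rough illustration of the single-screen-plus-drill-down pattern described above, the Python sketch below assembles a compact customer summary and fetches the full ticket history only when an agent asks for it. The data sources and field names are assumptions for illustration; a real system would pull them from CRM, ticketing, and order systems.

    # Hypothetical single-screen summary with drill-down. Field names and
    # in-memory data stand in for real CRM, ticketing, and order systems.
    def build_summary(customer_id, crm, tickets, orders):
        """Collect the key facts an agent needs on one screen."""
        profile = crm[customer_id]
        open_tickets = [t for t in tickets if t["customer_id"] == customer_id and t["open"]]
        recent_orders = [o for o in orders if o["customer_id"] == customer_id][-3:]
        return {
            "name": profile["name"],
            "plan": profile["plan"],
            "open_tickets": len(open_tickets),
            "recent_orders": [o["id"] for o in recent_orders],
        }

    def drill_down(customer_id, tickets):
        """Full ticket history, fetched only when the agent asks for it."""
        return [t for t in tickets if t["customer_id"] == customer_id]

    crm = {42: {"name": "Pat Example", "plan": "Premium"}}
    tickets = [{"customer_id": 42, "id": "T-1", "open": True}]
    orders = [{"customer_id": 42, "id": "O-9"}]
    print(build_summary(42, crm, tickets, orders))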

Callers expect quick answers based on the correct data. Doing everything possible to provide them with those answers is essential.

Increase Customer Self-Service 

Online knowledge bases may be giving way to artificial intelligence (AI) and chatbots in some cases, but they are not going away—and many of them are poorly designed or outdated. A knowledge management system can be used to help overhaul a business’s online knowledge base with the following steps:

  • Enhance online search. Making it easy for users to find information quickly, without wading through endless documentation, will improve user experience and customer satisfaction.
  • Devise good systems of taxonomy. Identify the information customers want and how they search for it, and then make it easy for those keywords and search terms to provide relevant results.

Customers are comfortable and familiar with online searches, and delivering bite-sized answers in an easy format can help improve their experience.

How to Design a Knowledge Management System for Customer Service

When designing or implementing a knowledge management system for the specific use of customer service, there are a few things to consider that will help ensure a better result.

Include Customer Service Representative Training

Organizations often focus their knowledge management efforts on the customer, but the system must also be a resource employees can use to better serve customers. When designing the system, incorporate training modules, use the knowledge base as a training aid during calls, and make it easy for representatives to find the data they need.

Without well-trained agents, any knowledge management system will flounder. Ensure the system serves both customers and agents, especially those learning the trade. Knowledgeable agents provide the best service.

Involve Customer Service Representatives in the Design Phase

One of the classic flaws of software design is that developers don’t always understand, or take the time to discover, the needs of the people who will use the system. When designing or implementing a knowledge management system, make sure it meets the needs of the front-line workers who will rely on it. Gather their input, let them try the system at various stages of the build, and choose metrics that align with their duties.

Integrate Related Systems

Knowledge management, customer relationship management (CRM), contact center, and key sales or management systems should not be separate islands within the enterprise. Avoid systems that are difficult or costly to integrate in favor of platforms that fit easily into existing infrastructure. A centralized knowledge hub should align and integrate fully with all other key customer-facing systems.

Incorporate Automation

Some call centers use automated voice response systems to reduce call volume, but automation can also be used to deliver better customer service. Chat systems that hand callers off smoothly to customer service representatives can prevent long wait times and boost caller satisfaction. Implement chat systems that provide useful answers rapidly, ensure the system knows when to refer the customer to an agent, and provide a call-back option within a specified time, as sketched below.
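
The hand-off logic itself can be a few explicit rules. The Python sketch below shows one possible escalation decision for an automated chat flow; the confidence and wait-time thresholds are illustrative assumptions, not vendor recommendations.

    # Illustrative escalation rule: answer from the knowledge base when
    # confidence is high, offer a call-back when the queue is long, and
    # otherwise hand off to a live agent. Thresholds are examples only.
    def next_step(answer_confidence, failed_attempts, estimated_wait_minutes):
        if answer_confidence >= 0.8 and failed_attempts == 0:
            return "answer_from_knowledge_base"
        if estimated_wait_minutes > 10:
            return "offer_callback_within_the_hour"
        return "transfer_to_agent"

    print(next_step(0.9, 0, 3))   # answer_from_knowledge_base
    print(next_step(0.4, 2, 25))  # offer_callback_within_the_hour
    print(next_step(0.4, 1, 5))   # transfer_to_agent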

Add Artificial Intelligence

AI systems like ChatGPT can be introduced into customer service to enhance the overall customer experience. For example, natural language processing (NLP) can help interpret user intent rather than expecting users to know the right keywords to get the answer they need. NLP can even take into account industry-specific terminology, different languages, and special content like product names. Self-learning search engines continuously learn from every interaction to deliver increasingly accurate and targeted results.
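
To show intent interpretation at its simplest, the Python sketch below matches a free-form question to the closest known intent using bag-of-words similarity. Production systems rely on trained language models; this toy version, with made-up intent names, only illustrates matching meaning rather than exact keywords.

    # Toy intent matcher: map a free-form question to the closest known
    # intent by bag-of-words cosine similarity. The intents are invented
    # examples; real deployments would use trained NLP models.
    import math
    from collections import Counter

    def vectorize(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    INTENTS = {
        "reset_password": "reset my password login locked out",
        "billing_question": "invoice charge billing payment refund",
    }

    def detect_intent(question):
        query = vectorize(question)
        return max(INTENTS, key=lambda name: cosine(query, vectorize(INTENTS[name])))

    print(detect_intent("I am locked out and need to reset my password"))  # reset_password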

AI and chat are big advances, but they are tools and must be fitted to a definite business purpose if they are to improve the customer experience. Seek out AI tools geared to vertical markets that are better suited to the needs of the specific audience.

Bottom Line: Using Knowledge Management Systems to Improve Customer Service

The modern customer is far different from those of even a decade ago. Knowledge management systems must be adjusted to cope with current needs by providing integrated, multi-channel systems that serve data in the format needed by agents and customers. Considering both customer and customer service representative needs when designing and implementing a system can help improve customer service and customer satisfaction while making staff more efficient and more effective.

Oracle Opens Cloud Region in Chicago https://www.datamation.com/cloud/oracle-opens-cloud-region-chicago/ Fri, 27 Jan 2023 20:25:57 +0000 https://www.datamation.com/?p=23820 Oracle is expanding its cloud infrastructure footprint in the U.S.

The Oracle Cloud Region in Chicago is the company’s fourth in the U.S. and serves the Midwest with infrastructure, applications, and data for optimal performance and latency, according to an announcement the company made last month.

The company hopes that organizations will migrate mission-critical workloads from their own data centers to Oracle Cloud Infrastructure (OCI).

Focus on Security, Availability, Resiliency

The new Chicago Cloud Region will offer over 100 OCI services and applications, including Oracle Autonomous Database, MySQL HeatWave, OCI Data Science, Oracle Container Engine for Kubernetes, and Oracle Analytics. It will pay particular attention to high availability, data residency, disaster protection, and security.

A zero-trust architecture is used for maximum isolation of tenancies, supported by many integrated security services at no extra charge. Its design allows cloud regions to be deployed within separate secure and isolated realms for different uses to help customers fulfill security and compliance requirements. It has a DISA Impact Level 5 authorization for use by U.S. government organizations to store and process Controlled Unclassified Information (CUI) and National Security Systems (NSS) information.

The Chicago Cloud Region contains at least three fault domains, which are groupings of hardware that form logical data centers for high availability and resilience to hardware and network failures. The region also offers three availability domains connected with a high-performance network. Low-latency networking and high-speed data transfer are designed to attract demanding enterprise customers.

Disaster recovery (DR) capabilities harness other Oracle U.S. cloud regions. In addition, OCI’s distributed cloud solutions, including Dedicated Region and Exadata Cloud@Customer, can assist with applications where data proximity and low latency in specific locations are of critical importance.

Oracle has committed to powering all worldwide Oracle Cloud Regions with 100% renewable energy by 2025. Several Oracle Cloud regions, including regions in North America, are powered by renewable energy.

See more: Oracle Opens Innovation Lab in Chicago Market

“Innovation Hub”

Clay Magouyrk, EVP at OCI, said the Midwest is “a global innovation hub across key industries,” as it’s home to over “20% of the Fortune 500, 60% of all U.S. manufacturing, and the world’s largest financial derivatives exchange.”

“These industries are increasingly seeking secure cloud services to support their need for high-speed data transfer at ultra-low latency,” Magouyrk said.

Samia Tarraf, Oracle Business Group global lead at Accenture, said the Oracle Cloud Region in Chicago will “provide clients with more choices for their cloud deployments.”

“We look forward to collaborating with Oracle to help organizations in the Midwest more quickly and easily harness the business-changing benefits of the cloud,” Tarraf said.

Oracle in the Growing Cloud Market 

Oracle holds an estimated 2% of the cloud infrastructure market, as of Q3 2022, according to Synergy Research Group. AWS holds the first position at 34%.

Gartner numbers put the size of the annual cloud market at somewhere around $400 billion, with growth rates maintaining a steady climb of about 20% or more per year.

Oracle aims to carve out a larger slice of the pie. It now provides cloud services across 41 commercial and government cloud regions in 22 countries on six continents: Africa, Asia, Australia, Europe, North America, and South America.

“The days of the cloud being a one-size-fits-all proposition are long gone, and Oracle recognizes that its customers want freedom of choice in their cloud deployments,” said Chris Kanaracus, an analyst at IDC.

“By continuing to establish cloud regions at a rapid pace in strategic locations, such as the U.S. Midwest, Oracle is demonstrating a commitment to giving its customers as many options as possible to leverage the cloud on their terms.”

See more: Best Cloud Service Providers & Platforms
