Samuel Greengard, Datamation

Best Threat Intelligence Platforms
https://www.datamation.com/security/threat-intelligence/ (Fri, 21 May 2021)

Staying in front of security threats is an increasingly difficult proposition. Despite a mind-boggling array of sophisticated tools, solutions and systems, the risks continue to grow.

That’s where threat intelligence enters the picture. It attempts to step beyond traditional antivirus and other malware protection and offer insights and protection proactively. As zero-day attacks and polymorphic malware flourish, these systems aim to ratchet up detection and protection, typically through data analytics and machine learning.

Threat intelligence platforms (TIPs) aggregate, ingest and organize data from a number of sources — including internal logs and external feeds — to spot risks early. They use APIs, bots and other methods to examine data, such as IP addresses, website content, server names and characteristics, and SSL certificates. Many platforms also rely on anonymous open source data sharing.

By examining patterns and various events and enriching the data, a TIP can spot unusual and threatening behaviors, tactics, techniques and procedures that can lead to an intrusion, data breach, ransomware or other cybersecurity problem. Many link to security information and event management (SIEM) solutions, endpoints, firewalls, APIs, intrusion prevention systems (IPSs) and other security components.  Many of the leading platforms also rely on human analysts to dig deeper.
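The matching-and-enrichment step described above can be illustrated with a minimal sketch. This is not any vendor's implementation; the indicator values and field names are hypothetical, and real platforms apply far richer scoring and behavioral analysis:

```python
# Minimal sketch of indicator-of-compromise (IoC) matching, the simplest
# form of threat-feed enrichment. All indicator values are hypothetical.

# A TIP aggregates indicators from many feeds into a lookup structure.
ioc_feed = {
    "203.0.113.42": {"type": "ip", "threat": "botnet C2", "score": 90},
    "198.51.100.7": {"type": "ip", "threat": "scanner", "score": 40},
}

def enrich_events(events, feed, min_score=50):
    """Flag log events whose source IP matches a high-confidence indicator."""
    alerts = []
    for event in events:
        indicator = feed.get(event["src_ip"])
        if indicator and indicator["score"] >= min_score:
            # Enrich the raw event with threat context before alerting.
            alerts.append({**event, "threat": indicator["threat"]})
    return alerts

firewall_log = [
    {"src_ip": "203.0.113.42", "dst_port": 443},
    {"src_ip": "192.0.2.10", "dst_port": 80},
]
print(enrich_events(firewall_log, ioc_feed))
```

In practice this lookup runs continuously against SIEM and firewall telemetry, and the score threshold determines which matches trigger automated responses versus analyst review.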

As staff working in security operations centers (SOCs) attempt to gain the upper hand on security risks, bad actors and emerging attack vectors, many are tapping threat intelligence frameworks. The value of a TIP is that it helps teams prioritize risks and threats and automate security responses. Emergen Research reports that the global threat intelligence market will reach $20.28 billion by 2028. What’s more, many platforms are turning to AI and machine learning to improve real-time threat intelligence. 

Yet, all threat intelligence platforms aren’t created equal. It’s critical to understand what exactly a platform offers, how it works, what it costs and what the vendor’s roadmap is for the future. With millions of threat indicators appearing daily — and many of them increasingly sophisticated — organizations are recognizing that quick assessment and response is a critical element in preventing economic and reputational damage.

How to Select the Right Threat Intelligence Platform

A number of factors are important when choosing a threat intelligence platform. Among them:

  • What data does the platform include and what’s the source of this data? It’s important to know how and where the vendor is collecting data, including the original source, and how it processes data. This might include factors such as IP addresses and domain URLs, reputational scores, newly discovered security risks and known vulnerabilities.
  • What format is the data? Vendors typically offer data feeds in CSV, XML, STIX, PDF and JSON. Some provide APIs to accommodate web services. In addition, it’s important to understand how the data is packaged — or how it can be adapted. This may include reports, summaries and alerts, along with customized feeds for customers.
  • How does the vendor formulate reports and alerts? What methodologies does it use to combine and blend data feeds en route to developing advisories and alerts? Does it rely only on machine data or use trained analysts? What other ways does the vendor distinguish itself from its peers?
  • How often does the vendor update the intelligence data? Ideally, data connections are real-time or constantly updated throughout a day.
  • What’s the price for a subscription? Prices among vendors vary greatly, often based on the type of services an organization requires. Some TIP vendors offer tiered product offerings, including free or inexpensive basic versions. Typically, the cost for an organization is several thousand dollars per month.
  • What’s included in the package? It’s important to know what resources the vendor has for learning how to use the platform and whether it offers any training. It’s also essential to know what services and support the vendor provides. Is there a 24/7 helpline? Is it live phone support or email support? If it’s the latter, how soon does the vendor respond? 
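As a rough illustration of the data-format question, here is a hedged sketch of consuming a JSON-formatted feed. The feed structure and field names are invented for illustration; real feeds such as STIX 2.x bundles are considerably richer:

```python
import json

# Hedged sketch: parsing a simple JSON threat feed. The structure and
# field names below are hypothetical, not any vendor's actual schema.
raw_feed = """
{
  "generated": "2021-05-21T12:00:00Z",
  "indicators": [
    {"value": "evil.example.com", "type": "domain", "confidence": 85},
    {"value": "203.0.113.9",      "type": "ip",     "confidence": 30}
  ]
}
"""

feed = json.loads(raw_feed)
# Keep only indicators confident enough to act on automatically;
# low-confidence entries might instead be routed to an analyst queue.
actionable = [i["value"] for i in feed["indicators"] if i["confidence"] >= 60]
print(actionable)
```

The same filtering idea applies regardless of format: CSV and XML feeds get parsed into the same internal indicator records before the confidence cutoff is applied.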

10 Top Threat Intelligence Platforms


AlienVault USM

The unified security management (USM) solution, part of AT&T, provides threat detection, incident response and compliance management capabilities. It collects and analyzes data from across attack surfaces, aggregates risks and threats — and continually updates threat information. The solution is designed to work within an ecosystem of AlienApps, which enables organizations to orchestrate and automate actions based on events.

Pros

  • Robust cloud support, including automated AWS and Azure discovery
  • Offers pre-built templates along with highly customizable reports and dashboards
  • Highly automated
  • Offers forensic querying
  • High customer ratings

Cons

  • Can be difficult to configure and customize
  • Some users say the interface can be challenging
  • Some users complain about inadequate customer support

Anomali ThreatStream

Anomali offers a robust platform for threat intelligence. It consolidates threat management and automates detection of risks with a set of tools that collect, manage, integrate, investigate and share data within an organization and from outside. The platform is available for on-premises and cloud-native deployments and includes support for virtual machines and air-gapping.

Pros

  • Excellent user interface
  • A mature platform with a deep and broad set of features
  • Supports numerous data formats
  • First-rate reporting capabilities
  • High customer support ratings

Cons

  • Some users complain about the lack of flexibility and an inability to adequately customize the platform
  • Lacks some automated reporting features
  • Inability to fully integrate with SIEM systems and freely move data between various systems

CrowdStrike Falcon

The company has established itself as a leader in the TIP space. It offers next-generation endpoint protection by combining antivirus (AV), endpoint detection and response (EDR) and a 24/7 managed hunting service via a lightweight agent that’s installed on devices. CrowdStrike’s services include advanced threat intelligence reporting and access to intelligence analysts who tailor intelligence and responses to an organization’s specific needs and requirements.

Pros

  • Large user base
  • Delivers high quality intelligence information using both machine and human analysis
  • Excellent and generally easy-to-use interface
  • Highly rated customer service and support
  • The lightweight agent doesn’t impact the performance and stability of systems

Cons

  • It’s a tiered service that can be pricey
  • Reporting functions aren’t as flexible as some users desire
  • Log management can be complex and confusing
  • Mac features lag behind Windows and Linux

FireEye Mandiant Threat Intelligence

The company has staked out a position as a pioneer and leader in the field. Its threat intelligence module is available as a software-as-a-service (SaaS) solution, and it combines data analytics and human oversight to spot and thwart threats. FireEye includes a dashboard, machine intelligence functions and other tools to provide broad and deep real-time insights.

Pros

  • Delivers high-quality threat intelligence information due to both machine and human collection and analysis capabilities
  • Typically integrates well with other tools, such as SIEM
  • Offers a free version with limited features
  • Users give FireEye high ratings for customer support

Cons

  • Can require a high level of technical knowledge to interpret reports and use the platform effectively
  • Some users report that the platform generates too much technical data that’s not actionable

IBM X-Force Threat Intelligence Services

IBM offers an expansive platform for managing threat intelligence. At the center: the company’s blending of machine-readable real-time data and human oversight. IBM offers detailed intelligence reports on threat activity, malware, threat actor groups and industry assessments. Its enterprise intelligence management platform is designed to feed threat data to existing security systems within organizations.

Pros

  • Provides a high-quality and up-to-date view of threats collected from a wide array of sources
  • Forrester describes the “accuracy and specificity” of data as a core strength
  • Generates low false-positive rates

Cons

  • Some users complain that the interface could be more user friendly
  • Can be complex and difficult to use effectively
  • Intelligence information may be too general at times. Some users say the platform could provide more contextualized and precise information

IntSights External Threat Protection Suite

IntSights offers a threat intelligence platform that aggregates and enriches a diverse set of data sources. It includes a vulnerability risk analyzer along with third-party and dark web monitoring. The platform delivers information through a single dashboard, and it offers real-time context to prioritize risks and help organizations conduct investigations — and block threats.

Pros

  • Offers a well-designed and easy-to-use interface
  • Provides rich and varied data
  • Highly rated customer sales and support

Cons

  • Reporting features aren’t as flexible or robust as some users would like
  • Sometimes delivers too much unneeded data along with dated threat intelligence information
  • Limited information and insights into dark web activities and behaviors

Kaspersky Threat Intelligence Services

Although the company’s threat intelligence offering is only part of its overall focus on cybersecurity, the company is a leader in the threat intelligence space. It provides threat data feeds, threat lookups and digital footprint intelligence that can expose an organization’s weak spots. 

Pros

  • Provides high-quality threat data
  • The company is aggressively focused on adding third-party integrations and support for new data sources
  • Offers rich reporting capabilities

Cons

  • Users complain that the solution can be complex and at times difficult to use
  • Sometimes provides too much general or irrelevant data
  • The user community reports high false-positive rates
  • Lacks automation that other leading vendors provide in their TIPs

Mimecast Threat Intelligence 

With a focus on email security, Mimecast examines numerous data sources to detect attacks. The subscription-based cloud security service is designed to protect email systems from various types of threats, ranging from viruses to ransomware. This includes URL protection that identifies, blocks and rewrites malicious links in email. The threat intelligence platform also helps prevent users from accessing dangerous sites or downloading malicious content.

Pros

  • Highly scalable
  • URL protection methods are highly effective in thwarting phishing and malware
  • A security operations center continuously monitors and analyzes threats

Cons

  • A focus on email security means that an organization will likely require other threat intelligence solutions
  • Users complain that Mimecast provides minimal support for archived emails

Palo Alto Networks WildFire

Harnessing inline machine learning, bare metal analysis and dynamic and static analysis, WildFire delivers a threat intelligence platform designed for zero-day malware protection. The TIP blocks unknown and high-risk file types, scripts and other data by extracting pieces of files, analyzing them and conducting data analysis across hundreds of behavioral characteristics.

Pros

  • Incorporates machine learning
  • Uses a multi-layered approach to increase threat detection
  • Highly automated
  • Strong integration with SIEMs and other tools
  • Large user base of 35,000+ delivers excellent shared intelligence

Cons

  • Expensive compared to other platforms
  • Can be difficult to set up, and it’s not easily customizable
  • Some users complain about the lack of customer support

Recorded Future

The vendor pulls and classifies data from “billions of entities” across languages and geographies to map relationships and spot threats. It combines advanced analytics and machine learning to discover, categorize and deliver real-time threat intelligence. Recorded Future also relies on a team of human analysts to guide data models and provide direction.

Pros

  • Delivers robust and extensive data collection capabilities and security intelligence
  • Highly flexible with different modules designed for specific needs and risks
  • Excellent interface
  • Strong search capabilities, including the ability to set up automated queries
  • Supports numerous types of threat intelligence, including brand, SecOps, threats, vulnerabilities, geopolitical and third party

Cons

  • Licensing model can be complex and expensive if a company uses multiple modules
  • Some users complain that the API is not as mature and robust as they would like
  • May require considerable training to use all the various features and capabilities


Comparison Table of Threat Intelligence Platforms

| Threat Intelligence Platform | Pros | Cons |
| --- | --- | --- |
| AlienVault USM | Strong automation; offers pre-built templates; flexible; features forensic querying | Can be difficult to configure and customize; interface can be confusing; users say customer support is sometimes lacking |
| Anomali ThreatStream | Excellent user interface; rich feature set; support for numerous data formats; strong reporting features | Can be difficult to customize; missing some automated reporting features; doesn’t always play well with SIEMs |
| CrowdStrike Falcon | Large user base; provides high-quality threat information; excellent interface; lightweight agent uses few system resources | Can be pricey; some reporting functions lack flexibility; log management can be confusing; Mac features lag behind Windows and Linux |
| FireEye Threat Intelligence | Provides high-quality threat information; integrates well with SIEMs and other tools; excellent customer support | May require deep technical knowledge; some users complain about receiving too much data |
| IBM X-Force | Extensive data collection capabilities; provides high-quality threat information; produces low false-positive rates | Some users find the user interface confusing; may require deep technical knowledge; information is sometimes too broad and non-specific |
| IntSights External Threat Protection Suite | First-rate user interface; offers rich and varied threat information; customer support is highly rated by users | Lacks some desirable reporting features; delivers too much nonspecific information at times; users say some threat intelligence information is dated |
| Kaspersky Threat Intelligence Services | Provides high-quality threat information; vendor is aggressively adding features; rich reporting capabilities | Some users say the platform is complex; sometimes provides too much general data; high false-positive rates; lacks some automation features |
| Mimecast Threat Intelligence | Highly scalable; effective in preventing phishing attacks; continually updated for the changing threat landscape | Effective only for email, so broader threat intelligence is still needed; limited support for scanning archived emails |
| Palo Alto Networks WildFire | Highly automated, with a multi-layer detection framework; strong SIEM support; large user base for threat intelligence sharing | Can be expensive; difficult to set up and customize; some users complain about inadequate customer support |
| Recorded Future | Robust and extensive data collection; highly flexible; excellent user interface; provides broad threat intelligence | Can be expensive; some users complain about API support; can be complicated and difficult to set up and use |

How Does Edge Computing Work & What Are the Benefits?
https://www.datamation.com/edge-computing/edge-computing/ (Fri, 09 Apr 2021)

Edge computing is a broad term that refers to a highly distributed computing framework that moves compute and storage resources closer to the exact point they’re needed—so they’re available at the moment they’re needed. Edge computing companies provide solutions that reduce latency, speed processing, optimize bandwidth and introduce entirely different features and capabilities that aren’t possible with centralized data centers.

The ability to process data analytics on an edge network enables features and capabilities that are crucial for advanced digital frameworks, including the Fourth Industrial Revolution. This includes IoT software, highly integrated supply chains, machine learning, artificial intelligence (AI), mobile connectivity, virtual reality and augmented reality, digital twins, robotics, 3D fabrication, medical devices, autonomous vehicles, connected video cameras, smart home automation and much more.

Although conventional servers, storage, and cloud computing continue to play a key role in computing, edge technologies are radically redefining business and life. By placing data processing at or near the source of data generation, edge devices become smarter and are able to handle tasks that would have been unimaginable only a few years ago. This data fuels real-time insights and applications ranging from sleep tracking and ridesharing to monitoring the condition of a drill bit on an oil rig.

Digital business increasingly depends on handling tasks at the point where a device or person resides. The ability to construct a distributed computing model and harness localized computing power is at the foundation of the Internet of Things (IoT) and today’s advanced digital technologies.

Edge computing fundamentally rewires and revamps the way organizations generate, manage and consume data. Gartner estimates that by 2025, 75% of data will be created and processed outside the traditional data center or cloud. 

How Does Edge Computing Work?

Edge computing works by capturing and processing information as close to the source of the data or desired event as possible. It relies on sensors, computing devices and machinery to collect data and feed it to edge servers or the cloud. Depending on the desired task and outcome, this data might feed analytics and machine learning systems, deliver automation capabilities or offer visibility into the current state of a device, system or product.

Today, most data calculations take place in the cloud or at a datacenter. However, as organizations migrate to an edge model with IoT devices, there’s a need to deploy edge servers, gateway devices and other gear that reduce the time and distance required for computing tasks—and connect the entire infrastructure. Part of this infrastructure may include smaller edge data centers located in secondary cities or even rural areas, or cloud containers that can easily be moved across clouds and systems, as needed.

Yet edge data centers aren’t the only way to process data. In some cases, IoT devices might process data onboard, or send the data to a smartphone, an edge server or storage device to handle calculations. In fact, a variety of technologies can make up an edge network. These include mobile edge computing that works over wireless channels; fog computing that incorporates infrastructure that uses clouds and other storage to place data in the most desirable location; and so-called cloudlets that serve as ultra-small data centers.

An edge framework introduces the flexibility, agility and scalability required for a growing array of business use cases. For example, a sensor might provide real-time updates on the temperature at which a vaccine is stored and whether it has remained within the required range throughout transport.

Sensors and edge IoT devices can track traffic patterns and provide real-time insights into congestion and routing. And motion sensors can incorporate AI algorithms that detect when an earthquake has occurred to provide an early warning that allows businesses and homes to shut off gas supplies and other systems that could result in a fire or explosion.
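The vaccine-transport example can be sketched as logic that runs on the edge device itself, so a decision doesn't require a round trip to the cloud. The temperature band reflects common cold-chain refrigeration guidance (2–8°C), but the code is purely illustrative:

```python
# Sketch of edge-side cold-chain monitoring: the device itself decides
# whether the shipment has left its safe temperature band, instead of
# streaming every reading to the cloud. Values are illustrative.
SAFE_MIN_C, SAFE_MAX_C = 2.0, 8.0

def check_readings(readings_c):
    """Return (index, temperature) pairs for readings outside the safe band."""
    excursions = []
    for t, temp in enumerate(readings_c):
        if not (SAFE_MIN_C <= temp <= SAFE_MAX_C):
            excursions.append((t, temp))  # only exceptions go upstream
    return excursions

readings = [4.1, 4.3, 5.0, 9.2, 8.7, 6.0]
print(check_readings(readings))  # [(3, 9.2), (4, 8.7)]
```

Because only the excursions are reported upstream, the device conserves bandwidth while still giving the supply chain an auditable record of problems.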

What is an Edge Device?

Within an edge network, systems that capture and transmit data serve as edge devices. This can include standalone devices as well as gateways that connect to devices downstream and interface with systems upstream. However, the concept increasingly revolves around IoT sensors and devices that reside at the edge. These systems can incorporate an array of sensing capabilities, including light, sound, magnetic fields, motion, moisture, tactile capabilities, gravity, electrical fields and chemicals. They may process data using apps or via on-board computing capabilities, and they often include batteries.

Edge and IoT devices can tap a variety of communications protocols, including

  • Bluetooth Low Energy (BLE)
  • RFID
  • Wi-Fi
  • Zigbee
  • Z-Wave
  • Cellular (including 5G)
  • NFC
  • Ethernet

Edge IoT devices typically send data over an open systems interconnection (OSI) framework that unites disparate devices and standards. These systems also connect with cloud and Internet protocols such as AMQP, MQTT, CoAP, and HTTP. Typically, the framework relies on a specialized edge device, such as a smart gateway, to route and transfer data and manage all the connections.

The microprocessors used in IoT devices continue to advance. Not only are some chips able to accommodate onboard processing—including AI and machine learning functions—they’re becoming smarter and more energy efficient. Some can wake on demand and include hard-wired capabilities. 5G chips are also changing the IoT and the edge by imparting devices with faster and more robust communications capabilities. As a result, the IoT and edge frameworks continue to advance and gain new capabilities.

How Does an Edge Gateway Work?

In order for IoT devices to deliver real value, there must be a way to connect the edge to the cloud and corporate data centers.

An edge gateway serves this purpose. After IoT edge devices collect data and local processing takes place—either on the device or within a separate device such as a smartphone or cloudlet—a gateway manages the flow of data between the edge network and a cloud or data center. Using either conventional coding or machine learning capabilities, it can send only necessary or optimal data, thus optimizing bandwidth and cutting costs.

An edge gateway also interacts with IoT edge devices downstream, telling them when to switch on and off or how to adjust to conditions. When there’s a need for data it can ping the device.

This enables analytics and machine learning on the edge, the ability to isolate devices, manage traffic patterns more effectively, and connect the gateway to other gateways, thus establishing a larger and more modular network of connected devices. As a result, an IoT framework can operate in a highly dynamic way.
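One common way gateways achieve the "send only necessary or optimal data" behavior described above is send-on-delta filtering: a reading is forwarded upstream only when it differs meaningfully from the last value forwarded. A minimal sketch, with an illustrative threshold:

```python
# Sketch of send-on-delta filtering at an edge gateway: forward a sensor
# reading to the cloud only when it changes by more than a threshold
# since the last forwarded value. The 0.5-unit delta is illustrative.
def filter_readings(readings, delta=0.5):
    forwarded = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > delta:
            forwarded.append(value)   # this reading would be sent upstream
            last_sent = value
    return forwarded

stream = [20.0, 20.1, 20.2, 21.0, 21.1, 19.8]
print(filter_readings(stream))  # [20.0, 21.0, 19.8]
```

Here six raw readings collapse to three transmissions, which is where the bandwidth and cost savings come from; a production gateway would layer on batching, retries and machine-learned relevance rules.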

What is an Edge Network?

An edge network resides outside or adjacent to a centralized network. Essentially, the edge network feeds data to the main network—and pulls data from it as needed. Early edge networks encompassed content delivery networks (CDNs) that helped speed video delivery to mobile devices.

But today’s edge networks are increasingly modular and interconnected—and carry a broad array of data. Today’s software-defined networking tools deliver enormous flexibility, scalability and customization for edge networks. In many cases, application programming interfaces (APIs) extend the reach of an edge network while automating workflows.

Edge networks can support an array of advanced capabilities, including traffic scalability and load balancing, context-aware routing that allows data to follow the most efficient path and avoid disruptions, and real-time controls that make it possible to change rules, logic and programming dynamically—depending on internal needs or external conditions.

Advanced edge networks support edge computing capabilities. This makes it possible to run algorithms and applications on the edge—and process and distribute data in more dynamic ways.

Edge Computing vs. Cloud Computing

There are fundamental differences between cloud computing and edge computing. The former relies on a central computing model that delivers services, processes and data services, while the latter refers to a computing model that’s highly distributed.

Edge environments typically strive to move applications and data processing as close to the data-generation site as possible. As robotics, drones, autonomous vehicles, digital twins and numerous other digital technologies mature, the need to handle computing outside the cloud grows.

Not surprisingly, organizations typically use both cloud and edge networks to design a modern IoT framework. The two technology platforms are not oppositional; they are complementary. Each has a role in building a modern data framework.

While edge computing can deliver a more agile and flexible framework—and reduce latency on IoT devices—it’s not equipped to accommodate enormous volumes of data that might feed an analytics application or smart city framework. What’s more, cloud bandwidth is highly scalable and cloud computing often supports a more streamlined IT and programming framework.

What are the Benefits of Edge Computing?

As organizations wade deeper into the digital realm, edge computing and edge technologies eventually become a necessity. There’s simply no way to tie together vast networks of IoT edge devices without a nimbler and more flexible framework for computing, data management and running applications outside a datacenter. Edge computing boosts device performance and data agility. It also can reduce the need for more expensive cloud resources, and thus save money.

Also, because edge computing networks are highly distributed and essentially run as smaller interconnected networks, it’s possible to use hardware and software in a highly targeted and specialized way.

This makes it possible, for example, to use different programming languages with different attributes and runtimes to achieve specific performance results. The downside is that heterogeneous edge computing frameworks introduce greater potential complexity and security concerns.

Edge Computing Security Concerns

In fact, physical and virtual security can pose a significant challenge for organizations using IoT edge devices and edge computing networks. There are a number of potential problems.

One of the biggest risks is dealing with thousands or even hundreds of thousands of sensors and devices from different manufacturers that rely on different firmware, protocols and standards. Adding to the problem: many organizations struggle to track edge IoT devices and other assets. In some cases, organizations wind up with different business or IT groups setting up devices that operate independent of each other.

Edge computing devices present other security challenges. Since many of them lack an interface, it’s necessary to manage security settings using outside devices and programs. When an organization deploys a large number of these devices, the security challenges become magnified. As a result, it’s important to focus security on a number of factors and issues, including device firmware and operating systems, TCP/IP stacks, network design and data security tools, such as encryption at rest and encryption in motion and data tokenization.

Network segmentation is also critical. It’s wise to isolate key systems, components and controls—and have ways to shut down a node or system that has been attacked. By segmenting and air gapping groups of devices and systems, it’s possible to prevent a breach or failure at one point in the network from cascading into the failure of the entire edge computing platform.

In addition, it’s critical to use standard security tools and strategies such as auditing the network and devices, changing passwords, disabling unneeded features that may pose a risk, and retiring devices that are no longer needed.
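Of the data-security tools mentioned above, tokenization is the easiest to illustrate: a sensitive field is replaced with a keyed, non-reversible token before the record leaves the device. A minimal sketch using Python's standard library; the key, field names and values are placeholders, not a production design:

```python
import hmac
import hashlib

# Sketch of data tokenization on an edge device: replace a sensitive field
# (here a device serial number) with a keyed, non-reversible token before
# transmission. The key below is a placeholder; real deployments would
# fetch it from a managed secret store.
SECRET_KEY = b"replace-with-a-managed-secret"

def tokenize(value: str) -> str:
    """Derive a stable, truncated HMAC-SHA256 token for a sensitive value."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"serial": "SN-12345", "temp_c": 4.2}
safe_record = {**record, "serial": tokenize(record["serial"])}
print(safe_record["serial"] != record["serial"])  # True: token replaces serial
```

Because the token is deterministic for a given key, upstream analytics can still correlate readings from the same device without ever handling the raw identifier.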

Edge Computing Companies

Over the last few years, numerous vendors have entered the edge computing space. These vendors address different market niches. Some, such as Dell, Cisco and HPE, sell networking and computing equipment that supports various aspects of edge and IoT frameworks, ranging from control systems to telecommunications.

Others, such as AWS, Microsoft Azure and Google Cloud, deliver cloud-based software and services that support IoT and edge functionality—including device management, machine learning and specialized analytics. Still others, such as PTC ThingWorx and Particle, deliver sophisticated platforms that connect and manage large numbers of edge IoT devices.

The Edge Continues Expanding

Organizations of all shapes and sizes benefit from a clear strategy for navigating edge computing and IoT devices. Over the next few years, the need to process data on the edge will grow. But it isn’t only the volume of data that’s important. It’s also the velocity of data moving within organizations and across business partners and supply chains.

As digital frameworks evolve and the need to compute within decentralized environments grows, edge infrastructure becomes indispensable.

Best IoT Platforms & Software
https://www.datamation.com/networks/iot-platforms/ (Tue, 23 Feb 2021)

Over a few short years, the Internet of Things (IoT) has evolved from an intriguing concept with limited capabilities into a full-fledged platform for IT and business.

Organizations are increasingly turning to IoT platforms to perform an array of tasks, ranging from real-time inventory visibility and predictive maintenance systems to energy management and smart buildings. They’re also adopting Industry 4.0 tools like digital twins. It’s safe to say that no sector has been left untouched by the IoT – cloud computing in particular is closely linked to it.

IoT software plays an important and growing role in connected systems. These platforms introduce an architecture that tames some of the rough edges associated with connecting devices, standards, protocols and software systems. Instead of building an IoT framework entirely from scratch, organizations can tie together device management, data collection, data analytics, machine learning (ML), IT integration and cybersecurity. IoT and Industrial IoT (IIoT) platforms simplify tasks, trim costs and drive performance gains.

Of course, finding the right platform is essential. Pricing models, standards, cloud connectivity and elasticity, system flexibility and security methods vary greatly. Some platforms excel at connecting sensors while others are focused more on communications and data processing.

As a result, it’s important to consider what your organization’s requirements are for hardware, data access, reporting, and budgeting before selecting an IoT platform. Different business models and different IT infrastructures are better suited to one platform or another. 

How to Select the Best IoT Platform

There are several factors to consider when analyzing the IoT platform marketplace. These include:

What are we trying to achieve?

As with any information technology solution, it’s important to start with an assessment of how the IoT can automate and improve business practices and processes. This includes productivity gains, faster and better functionality and lower costs.

What technology do we already have in place?

It’s essential to analyze existing IT, cloud and network frameworks to determine their fit with an IoT platform. Along the way, an organization must determine whether current IoT devices will work with the framework and which, if any, require upgrading, retrofitting or outright replacement.

Is the platform flexible?

The IoT space is evolving at a furious rate. While all these vendors support some level of flexibility, not all approaches are equal and some are a better match for certain IoT configurations. There’s also a need to support a growing array of open source components. Matching your roadmap with theirs is essential.

What type of data analytics and machine learning (ML) does it support?

IoT frameworks are designed to automate data collection and processing on the edge. Machine learning is a key part of this picture. As a result, it’s important to understand whether an IoT platform supports ML.
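To make the idea concrete, here is a minimal sketch of the kind of lightweight anomaly detection an edge node might run before shipping data to the cloud. The window size and z-score threshold are arbitrary assumptions for the example, not a recommendation from any platform.

```python
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    """Flags readings that deviate sharply from a rolling window of recent values."""

    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # bounded memory suits edge hardware
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Record a reading; return True if it looks anomalous versus recent history."""
        anomalous = False
        if len(self.readings) >= 2:
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.readings.append(value)
        return anomalous

detector = RollingAnomalyDetector()
for reading in [10.0, 10.1] * 10:   # warm up with stable sensor values
    detector.observe(reading)
spike_flagged = detector.observe(100.0)   # sudden spike
normal_flagged = detector.observe(10.05)  # back to normal
```

A platform with genuine edge ML support would let you push trained models to devices; the point here is simply that detection can happen locally, before any round trip to the cloud.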

What security is in place?

The IoT is notoriously weak in regard to cybersecurity. Device manufacturers rely on various standards and approaches, which often results in gaps and vulnerabilities. A platform may provide some help.

What’s the vendor’s strategy and roadmap?

It’s always wise to survey the vendor to determine how it approaches updates, patches, security issues and other factors. Similarly, it’s important to understand how it handles customer support and how it sees the platform evolving.

What’s the cost and the ROI?

It’s vital to consider the initial cost of an IoT platform but also total cost of ownership (TCO) and what type of return on investment it can deliver to your organization.
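The arithmetic is straightforward; the sketch below compares total cost of ownership against projected benefit over a planning horizon. All figures are invented inputs for illustration, not vendor pricing.

```python
def total_cost_of_ownership(upfront: float, annual_run_cost: float, years: int) -> float:
    """TCO = initial platform cost plus recurring operating costs over the horizon."""
    return upfront + annual_run_cost * years

def simple_roi(annual_benefit: float, tco: float, years: int) -> float:
    """ROI as a fraction: (total benefit - TCO) / TCO."""
    total_benefit = annual_benefit * years
    return (total_benefit - tco) / tco

# Hypothetical three-year evaluation of one candidate platform.
tco = total_cost_of_ownership(upfront=50_000, annual_run_cost=20_000, years=3)
roi = simple_roi(annual_benefit=60_000, tco=tco, years=3)
```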

Here are ten leading IoT platforms:

Leading IoT Platforms

AWS IoT

AWS IoT is designed to auto-provision, manage and support connected systems from the edge to the cloud. It includes analytics and data management features, tools for integrating devices, and multi-layered security mechanisms such as authentication, encryption and access controls. The focus is on industrial, connected home and commercial applications. AWS IoT integrates with other AWS solutions and components as well as open source frameworks.

Pros

  • Highly scalable cloud infrastructure supports billions of devices and trillions of messages.
  • Highly specialized tools and device software streamline workflows and processes.
  • Offers pay-as-you-go pricing with templates and ready-built solutions for specific industries.
  • AWS has partnerships with top IoT industry vendors, including Ayla, Bosch, Domo, Deloitte, Kinesis, Wipro and Verizon.

Cons

  • Users report that it can be difficult to set up and use.
  • Some users complain that documentation and product support are at times lacking.
  • Debugging software and connections can be a problem.
  • IIoT capabilities are not as fully built out.

Ayla Agile IoT Platform

This cloud-based platform-as-a-service framework supports commercial and industrial solutions suited to a variety of vertical industries, including food services, appliances and manufacturing. Ayla Agile IoT Platform addresses edge connectivity, device management, data aggregation and processing, and enhanced security functions.

Pros

  • Partnerships in place with leading cloud and service providers, including AWS, Google Cloud, IBM and Qualcomm. A cloud connection agent simplifies connectivity and support.
  • Receives high marks for ease of use and the large number of devices it supports.
  • Users report an intuitive user interface (UI) and strong notification and reporting capabilities, including filters and drill-down views of devices.
  • Offers digital twins and advanced diagnostic functions.

Cons

  • Some complain that the platform lacks desired features and capabilities.
  • The focus is on three primary areas: home automation systems, discrete and process manufacturing, and telecoms/Internet Service Providers.
  • May require integration with more advanced solutions to deliver the full functionality required by a business.

Azure IoT

The cloud-hosted platform ties together numerous templates, tools and open source components to support IoT initiatives ranging from condition monitoring to predictive maintenance. Azure IoT Hub manages bidirectional communications to and from devices, including provisioning and authentication. The platform supports numerous industries and use cases, including process manufacturing, energy, healthcare, retail and transportation.

Pros

  • The platform supports hybrid IoT applications through Azure IoT Edge and Azure Stack.
  • Offers built-in device management and provisioning to connect and manage IoT devices at scale.
  • Supports digital twins and offers strong analytics and ML support.
  • Includes a security-enhanced communication channel for sending and receiving data from IoT devices.

Cons

  • Can be complex to set up and use. An extensive knowledge base contains lots of information about the platform, but users report difficulty finding answers.
  • Some users complain that the platform lacks key operational functionality.

Cisco Kinetic

The IoT operations platform handles complex gateway management tasks, including provisioning and monitoring. It also tackles edge and fog processing of data and includes a data control module that facilitates the movement of data using policy enforcement mechanisms. Kinetic supports large IoT deployments and rules-based policy management across multi-cloud environments.
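The policy-driven data movement described above can be pictured as a first-match rule table. The rule format and destination names below are invented for illustration and bear no relation to Cisco’s actual configuration syntax.

```python
# Each (hypothetical) policy matches on message attributes and names a destination.
POLICIES = [
    {"match": {"type": "video"},     "destination": "edge-store"},       # keep heavy data local
    {"match": {"type": "telemetry"}, "destination": "cloud-analytics"},  # ship metrics upstream
]

def route(message: dict) -> str:
    """Return the destination of the first policy whose match keys all agree."""
    for policy in POLICIES:
        if all(message.get(key) == val for key, val in policy["match"].items()):
            return policy["destination"]
    return "quarantine"  # default for anything no policy claims

video_dest = route({"type": "video", "bytes": 10_485_760})
metric_dest = route({"type": "telemetry", "sensor": "temp-1"})
unknown_dest = route({"type": "log"})
```

The design choice worth noting is the explicit default: in a policy-enforced pipeline, data that matches no rule should be held rather than silently forwarded.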

Pros

  • The platform is highly scalable and modular. It can connect a wide range of devices.
  • Provides deep visibility into nodes, microservices and other components—with a minimal footprint.
  • Strong real-time data visualization with access to various data sources, including IoT devices and databases.
  • Offers pre-built widgets and templates for data visualization and other tasks.

Cons

  • Lacks specialized IoT components that may be necessary for an IoT project.
  • Users report that setting up and using the platform can be challenging.
  • There are some limitations for non-Cisco networking and infrastructure hardware.

Google Cloud IoT

Google Cloud IoT delivers an intelligent platform for building and managing a highly scalable network of IoT devices. It’s designed to manage devices and data on the edge and into the cloud. The platform offers strong analytics, ML and automation features that are valuable for predictive maintenance, real-time asset tracking, logistics and supply chain management and smart city and building initiatives.

Pros

  • Offers a powerful AI platform, including more advanced Vision AI and Video AI that drive insights from images and video in the cloud and on the edge.
  • Offers robust IoT developer kits.
  • Extensive ecosystem of partners, including Accenture, NetApp, Palo Alto Networks, Siemens and Sigfox.

Cons

  • Users report challenges related to setting up, configuring and using the platform.
  • Security and privacy settings can be confusing, especially for APIs and authentication.
  • Limited support for using and importing datasets from outside the Google ecosystem. 

IBM Watson IoT

IBM’s fully managed, cloud-hosted IoT platform tackles everything from device registration and authentication to connectivity and data management/analytics. Areas of specialization include enterprise asset management, facilities management and systems engineering.

Pros

  • Highly scalable and flexible.
  • Supports powerful AI-driven analytics and ML functions that can be adapted to an industry or business.
  • Watson cognitive APIs support interconnectivity across devices and vendors.
  • The platform supports blockchain.

Cons

  • Some users report a steep learning curve.
  • Limited data storage formats and options can make global data management challenging. 

Oracle IoT Intelligent Applications

Oracle delivers broad and deep visibility into IoT devices. The cloud-based platform is optimized for smart manufacturing, connected assets, connected logistics, workplace safety and other tasks. It supports the use of real-time data for visualizations, mapping and automation.

Pros

  • Built-in integrations and API framework for ERP, supply chain management (SCM) and other enterprise systems and data.
  • Offers pre-built dashboards and widgets that facilitate deep visibility into data and events.
  • Supports digital twins and 3D visualizations.
  • Offers pre-built threads for enterprise applications such as manufacturing, maintenance, transportation and warehouse management.

Cons

  • Some functions and features are limited to using an Oracle infrastructure.
  • May require third party device management solutions for specialized needs and requirements.
  • Oracle’s IoT Cloud Service doesn’t support a complete range of IIoT protocols and third party IoT products. 

Particle

Particle offers a broad and extensive cloud-to-edge framework for managing connected devices. It accommodates a broad array of tasks, including asset tracking, fleet management, predictive maintenance, environmental monitoring, real-time order fulfillment and remote monitoring and controls.

Pros

  • Global IoT connectivity through Wi-Fi, cellular and BLE in over 150 countries.
  • Strong security features, including built-in device encryption, PKI authentication, robust security logging and strong privacy controls.
  • Strong analytics and ML features.
  • Excellent scalability, including auto-provisioning and device scaling.
  • Large community of users and strong support capabilities.

Cons

  • Complex configurations can be prone to disruptions and interruptions.
  • High upfront costs but these are tempered by reduced operations and development costs.
  • Users complain that the environment can be complex. 

PTC ThingWorx

The ThingWorx platform is a robust and fully developed industrial IoT (IIoT) solution. It addresses a wide range of manufacturing, service and engineering use cases through end-to-end device auto-provisioning and management. ThingWorx specializes in remote access monitoring, remote maintenance and service, predictive capabilities and other functions on-premises and in the cloud.

Pros

  • Has an extensive global ecosystem of technology partners and systems integrators.
  • Uses more than 150 drivers to boost standardization and connectivity across heterogeneous environments.
  • ThingWorx Flow offers powerful orchestration capabilities in a visual environment.
  • Highly rated service and support.

Cons

  • Users say the digital twin component can be difficult to integrate and use with some industrial applications.
  • Lacks some standardized tools for builds and deployments as well as code analysis and verification.
  • Some users complain about a lack of tools and widgets to manage IoT devices.

SAP Internet of Things

The platform provides cloud, edge and data technologies required to build out the IoT. It also aggregates IoT data to drive analytics, machine learning, and blockchain technologies through SAP Analytics Cloud. In addition, SAP offers various microservices that can be deployed across edge computing and IoT devices. These can be used for smart systems and supply chain optimization.

Pros

  • Strong IoT data management, analytics and ML capabilities. Supports data persistence, streaming analytics, predictive analytics, and contextual features.
  • Supports digital twins through sensor and contextual business data.
  • Strong automation capabilities through IoT application templates.
  • Highly scalable.
  • Top rated service and support.

Cons

  • Users report difficulties integrating components with legacy IT and non-SAP components.
  • May require significant customization in order to build out an IoT ecosystem.
  • Some users report difficulties finding features and navigating to desired locations within the platform.
  • Pricing model can be complex.

Best IoT Platform Comparison Chart

AWS IoT

Pros
  • Excellent templates
  • Pay-as-you-go pricing
  • Broad ecosystem of partners

Cons
  • Debugging can be difficult
  • IIoT capabilities are somewhat limited

Ayla Agile IoT Platform

Pros
  • Strong partnerships
  • Users find it easy to use with an intuitive interface
  • Strong digital twin support

Cons
  • Users would like to see additional features
  • IoT focus is narrower than other vendors
  • May require additional integration

Azure IoT

Pros
  • Powerful device management
  • Strong digital twin support
  • Focus on security

Cons
  • Some users complain about missing features and functionality
  • Can be expensive

Cisco Kinetic

Pros
  • Scalable and modular platform
  • Delivers deep visibility into devices and microservices
  • Pre-built templates for visualizations and other tasks

Cons
  • Implementing and using the platform can be difficult
  • May not support non-Cisco networking components
  • No IoT device hardware offered

Google Cloud IoT

Pros
  • Best-in-class AI, including vision and video AI
  • Robust developer kits
  • Extensive partner network

Cons
  • Security and privacy controls can be difficult and confusing
  • Limited support for data residing outside the Google ecosystem

IBM Watson IoT

Pros
  • Strong support for AI-driven analytics and ML
  • Robust APIs support interconnectivity
  • Support for blockchain

Cons
  • Some users complain about limited storage options
  • Expensive

Oracle IoT Intelligent Applications

Pros
  • Built-in integrations and APIs
  • Excellent dashboards and widgets
  • Strong support for digital twins and 3D visualizations

Cons
  • Some functions and features aren’t available outside an Oracle infrastructure
  • May require third-party device management add-ons
  • Doesn’t support a full range of IIoT protocols

Particle

Pros
  • Strong security features
  • Highly scalable and flexible
  • Large user community

Cons
  • Expensive
  • Steep learning curve for certain configurations

PTC ThingWorx

Pros
  • Focus on standardization
  • Powerful orchestration tools
  • Top-notch service and support

Cons
  • Lacks standardized tools for certain tasks
  • Lack of widgets to manage IoT devices
  • Expensive

SAP Internet of Things

Pros
  • Strong digital twin support
  • Powerful automation
  • Highly scalable
  • Excellent service and support

Cons
  • Pricing model can be complex
  • Users say navigation can be a challenge
  • May require heavy customization
Top APM Tools & Software

https://www.datamation.com/applications/top-apm-tools/ Wed, 23 Dec 2020 06:00:00 +0000

Given the speed and complexity of the cloud computing era, application performance monitoring (APM) is an increasingly important element of successful business. When critical applications like Big Data software run slowly, productivity drops, IT costs rise, and employees and customers become frustrated.

Application performance monitoring targets these issues. It provides tools for managing code, understanding application dependencies, viewing transaction times and other technical indicators, and gauging overall user experience.

APM tools help a business know when it’s on track with overall objectives, but they also help developers understand whether they’re coding effectively. For example, a tool may track data analytics metrics in relation to the customer journey and overall experience, provide insight into performance issues related to servers, storage, software as a service or other factors, or deliver code-level visibility into Java or .NET apps to spot problems.
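A toy version of the code-level timing such tools perform can be sketched as a decorator that records each transaction’s elapsed time. Real APM agents typically inject this instrumentation automatically (via bytecode manipulation or monkey-patching) rather than asking developers to decorate functions; the names below are invented for the example.

```python
import time
from functools import wraps

METRICS = {}  # function name -> list of elapsed times in milliseconds

def traced(fn):
    """Record how long each call to fn takes, even when it raises."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            METRICS.setdefault(fn.__name__, []).append(elapsed_ms)
    return wrapper

@traced
def handle_request(payload: str) -> str:
    # Stand-in for real application work being monitored.
    return payload.upper()

response = handle_request("ping")
```

From timings like these, an APM backend can compute percentiles, flag slow transactions and correlate them with infrastructure metrics.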

These tools typically fall into three basic categories: metrics-based, code-level and network-based. In the solutions below, you’ll see that many tools combine elements of each category.

Although every organization can benefit from APM tools, finding the right vendor and specific solution can prove challenging. Different products approach application performance monitoring in different ways, including the type of infrastructure, level of automation, the use of machine learning, and the ability to integrate with cloud applications.

Choosing the Right Application Performance Monitoring Tool: Three Tips

Understand your APM needs

Organizations should consider these functional areas when selecting a solution:

  • Digital experience monitoring (DEM), which helps optimize performance for a digital agent, human or machine, particularly as an entity connects to enterprise applications and services;
  • Application discovery, tracing and diagnostics (ADTD), which diagnoses processes and examines relationships between application servers, nodes and other systems;
  • Artificial intelligence for IT operations (AIOps), which combines big data and machine learning functionality to support IT operations.

Select the right framework

There are a number of key considerations in selecting an application. These include cloud integration, IoT integration, database support, dashboard visibility and controls, reporting (including historical analytics), code language support, the ability to conduct end-to-end tracing, cross-application tracking capabilities, code-level diagnostics and tracing, and notification and alert capabilities. Together, these elements comprise the framework – and the framework must check enough of your boxes.
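One simple way to check those boxes systematically is a weighted score across the criteria above. The weights and 1–5 ratings below are invented inputs for illustration; each organization would set its own.

```python
# Hypothetical weights reflecting how much each criterion matters to one buyer.
WEIGHTS = {
    "cloud_integration": 0.3,
    "end_to_end_tracing": 0.3,
    "dashboards": 0.2,
    "alerting": 0.2,
}

def weighted_score(ratings: dict) -> float:
    """Weighted average of per-criterion ratings on a 1-5 scale."""
    return sum(WEIGHTS[criterion] * ratings[criterion] for criterion in WEIGHTS)

vendor_a = weighted_score({"cloud_integration": 5, "end_to_end_tracing": 4,
                           "dashboards": 3, "alerting": 4})
vendor_b = weighted_score({"cloud_integration": 3, "end_to_end_tracing": 5,
                           "dashboards": 4, "alerting": 3})
```

A scorecard like this won’t make the decision for you, but it forces the evaluation team to agree on what matters before vendor demos begin.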

Choose the vendor that fits your particular needs

Due diligence is vitally important when selecting a vendor. It’s wise to thoroughly question an APM solutions provider and understand its business philosophy, technology framework and vision for the future. Other important considerations are licensing costs, update policies, service level agreement (SLA) and overall customer support. But most important: are they offering some level of flexibility for your specific business needs? Are they willing to negotiate?

In this Datamation article, we have identified 10 top application performance monitoring vendors and tools:

Broadcom (CA Technologies)

Value proposition for potential buyers: Broadcom purchased CA Technologies in late 2018. The platform—available both on premises and as a SaaS solution—delivers core APM, infrastructure, network, end-user, cloud, mainframe and business transaction monitoring within the vendor’s CA Digital Experience Insights (DXI) platform. The focus is heavily on actionable analytics that identify problem points and promote improved digital experiences. Broadcom is ranked as a “Leader” in Gartner’s MQ.

Key values/differentiators:

  • The product offers a comprehensive approach to APM. Over the last few years, the vendor has focused on modernizing its underlying technology architecture, including through a greater use of open source tools. One of the main areas of focus is visual analytics.
  • The vendor also has focused on improving the usability of its solutions through improved assisted triage workflow, Gartner noted. The system is designed to aid in identifying performance anomalies and business transactions that can be further investigated through detailed drill-downs.
  • CA aims to extend its coverage of applications and IT infrastructures. It is continuing to expand and extend core functionality to ingest a broader array of data sources through built-in functionality as well as through open source components.

Cisco (AppDynamics)

Value proposition for potential buyers: Cisco acquired AppDynamics in 2017. The APM solution is available as both an on-premises and SaaS solution. The platform offers core APM monitoring but also analytics tools for tracking end users and various types of infrastructure, including mainframes, cloud, and SAP S/4HANA. Gartner ranks Cisco (AppDynamics) as a “Leader” on its MQ.

Key values/differentiators:

  • Cisco has added machine learning technology to the AppDynamics platform. The vendor also has added features to broaden and deepen support for the cloud. This includes microservices, serverless computing, container and hybrid environments, and cloud-native integrations.
  • The platform offers comprehensive support for tracking business processes and metrics that can aid in supporting IT as well as lines of business and the overall enterprise.
  • Cisco has established a roadmap focused on enhancing its solutions through added cloud capabilities, improved business performance monitoring, an improved UI, and greater support for commercial off-the-shelf (COTS) applications.

Dell Foglight (Quest)

Value proposition for potential buyers: The website monitoring platform focuses on risk assessment, diagnostics, user management and server monitoring for online environments. Quest is designed to simplify IT management by encompassing a framework of data protection, database management, security, performance monitoring and more.

Key values/differentiators:

  • The Quest platform supports nearly every approach and environment, including Active Directory, Azure Active Directory, Exchange, Google, Hadoop, Office 365, Oracle, SharePoint, SQL Server and VMware.
  • The solution supports database monitoring and performance optimization. It includes advanced workload analytics tools that help organizations consolidate and standardize database performance management across diverse multi-platform environments.
  • Foglight offers a “single pane of glass” into heterogeneous virtual environments. This simplifies application monitoring and performance management for various tasks, including asset tracking, changes in machines and VM migrations. The program is particularly adept at displaying information about storage utilization, memory usage, CPU performance, disk inputs/outputs (I/O), and network I/O.

Dynatrace

Value proposition for potential buyers: The privately held firm offers an APM solution that is available on-premises, on a managed services basis or as a SaaS offering. It includes APM, DEM, infrastructure, network monitoring and AIOps capabilities—along with real-time topology and AI algorithms that automatically detect anomalies and understand the business impact across users, applications, and infrastructure. Dynatrace is ranked as a “Leader” in Gartner’s MQ.

Key values/differentiators:

  • The vendor supports a wide array of environments, from mainframe to COTS and SaaS through a combination of legacy and newer technology infrastructure and tools. The firm’s APM approach is supported by its OneAgent architecture, which focuses on total automation and zero configuration.
  • Dynatrace has continually expanded the breadth and depth of the APM solution, including greater support for cloud frameworks. In late 2017, the vendor acquired Qumram, which added session replay to its DEM module.
  • Gartner noted that the vendor’s roadmap includes expanded support for “multicloud and hybrid architectures.” This will support the greater use of purpose-built AI for faster root cause analysis and improved automation and remediation. The vendor is also expanding the use of session replays as part of customer and business journey analysis.

IBM

Value proposition for potential buyers: IBM offers both on-premises and SaaS-based APM solutions. Each uses an approach specifically optimized for the user’s application environment. IBM’s large network of business partners, and the product’s ability to connect with a wide range of products and solutions, makes it a popular choice for larger and mid-size organizations. Gartner ranks IBM as a “Challenger” on its MQ.

Key values/differentiators:

  • The SaaS package is a multi-tenant APM solution that is part of IBM Cloud App Management Base and Advanced. It includes a web-based UI and configurable dashboard that monitors AWS, Azure and other cloud environments using cloud-native APIs.
  • IBM includes RUM synthetic transactions monitoring, log analytics, middleware monitoring, multivariate anomaly detection, and business insight through IBM Business Monitor.
  • IBM is a leader in AI and cognitive computing. It leverages the Watson AI platform within its APM solution. It also incorporates powerful open source tools, such as the Grafana plugin, for advanced monitoring, analytics, and visualizations.

Microsoft

Value proposition for potential buyers: Microsoft delivers full APM support only as a SaaS solution, though the vendor’s older System Center Operations Manager can tackle basic functions. The solution is designed to work with Azure, and it supports .NET and Java applications, along with apps written in Python, Go and Node.js. Microsoft is ranked as a “Challenger” on Gartner’s MQ.

Key values/differentiators:

  • Microsoft integrated its Application Insights tool into Azure Log Analytics to form the new Azure Monitor in 2018. This SaaS-based multitenant APM solution is deeply integrated with Microsoft Azure. It includes DEM, log analytics and cloud-native monitoring for Azure, with support for containers and Kubernetes.
  • The APM solution offers strong integration with Microsoft development tools. It also offers analytics tools that are designed to accommodate large volumes of events, logs, metrics, transactions and security.
  • Microsoft offers a consumption-based pricing model that Gartner describes as a “competitive differentiator.”
  • The roadmap for Microsoft APM revolves around adding algorithms to reduce event noise, improve forecasting performance and detect anomalies. The vendor also plans to expand support for additional geographic regions.

New Relic

Value proposition for potential buyers: The vendor offers APM only as a SaaS solution. It is designed to work with cloud platforms such as AWS, Azure and Google Cloud. It supports Kubernetes containers and microservices monitoring, as well as business-centric analytics, infrastructure monitoring and distributed tracing capabilities. Gartner ranked New Relic as a “Leader” in its MQ.

Key values/differentiators:

  • The vendor is known for delivering a robust UI and strong workflow capabilities. It is among the easiest and quickest APM solutions to deploy and deliver results. It includes a powerful auto-instrumentation feature that supports major programming languages.
  • Several acquisitions have strengthened the firm’s position in recent months. In addition, New Relic invests heavily in R&D and plans to continue upgrading the platform. This includes stronger root cause analysis features, faster and better anomaly detection and event correlation, and faster incident response capabilities.
  • A February 2019 acquisition of analytics vendor SignifAI added machine learning and AI features to the platform.

Oracle

Value proposition for potential buyers: Oracle is a long-time provider of APM tools. It offers an on-premises solution through Oracle Enterprise Manager (OEM) and a SaaS solution through Oracle Management Cloud (OMC).  The latter platform is a multitenant framework that addresses APM requirements across applications, infrastructure and end-user monitoring environments. Oracle is ranked as a “Challenger” in the Gartner MQ.

Key values/differentiators:

  • Although OMC is optimized for Oracle infrastructure and workloads, it can be used for APM within heterogeneous environments. The solution is able to collect log and metrics data from numerous external sources.
  • The OMC APM solution is available in different configurations that are designed to address different customer needs. This includes analytics requirements and the level of orchestration required to oversee infrastructure.
  • Oracle is expanding the platform to include support for additional modern programming languages through OpenTracing. It is incorporating more robust features and capabilities through proprietary and open source tools.

Riverbed

Value proposition for potential buyers: Riverbed is a long-time provider of APM solutions. It offers several products that address different enterprise requirements. These include both on-premises and SaaS-based solutions for monitoring, analyzing and addressing anomalies and various other challenges. Gartner ranks Riverbed as a “Challenger” in its MQ.

Key values/differentiators:

  • AppInternals is Riverbed’s core APM solution. It offers agent-based, bytecode instrumentation and integrates with several other of the vendor’s products to deliver more comprehensive infrastructure monitoring and application and user management.
  • Gartner gives Riverbed high marks for delivering a consistent user experience across both on-premises and SaaS products. It also praised the company for its use of closely coupled DEM and NPMD functions, and for its ability to handle large volumes of data effectively.
  • Riverbed is working to better unify agents and features within its various APM tools and products, along with microservices and containers. In addition, it is adding support for modern languages.

SolarWinds

Value proposition for potential buyers: The vendor entered the APM market in 2016, after acquiring the assets of AppNeta. It offers powerful tools that span IT networks, infrastructures and applications. These include host agents, SNMP polling and application dependency mapping. The vendor’s solutions are available in both on-premises and SaaS-based versions. The on-premises solution is called Server & Application Monitor (SAM) and the SaaS product is named AppOptics. Gartner ranked the company a “Niche Player” in its MQ.

Key values/differentiators

  • The vendor has added numerous features and capabilities to its AppOptics solution over the last couple of years, including support for containers such as Docker and Kubernetes. It supports code-level instrumentation and infrastructure monitoring.
  • SolarWinds is bolstering analytics capabilities and adding machine learning tools that are designed to automate processes and reduce complexity. This includes combining tracing, metrics, logging and end-user data into a unified workflow, and adding time-series prediction and classification features.
  • The vendor’s tools are designed primarily for small and mid-size organizations. They are ideal for businesses looking to deploy a solution based on a simpler consumption model. SolarWinds is known for solutions that are intuitive and straightforward.

Top Application Performance Management Comparison Chart

| Vendor | Focus | Key Differentiator | Key Features |
| --- | --- | --- | --- |
| Broadcom (CA Technologies) | On-premises and SaaS APM that revolves around actionable analytics to improve digital experiences. | Offers a Digital Experience Insights (DXI) platform that addresses infrastructure, network, cloud, end-user and business transaction monitoring. | Analytics tools; assisted triage workflow that offers drill-down visibility into issues. |
| Cisco (AppDynamics) | On-premises and SaaS APM with monitoring, diagnostics and analytics. | Addresses a wide array of APM requirements, including mainframes, clouds and SAP S/4HANA. | Supports business and IT metrics; integrated machine learning and powerful cloud tools. |
| Dell Foglight (Quest) | A SaaS-based APM solution that provides risk assessment, diagnostics, user management and server monitoring for online environments. | Consolidates and standardizes database performance management across diverse multi-platform environments. | Database monitoring; performance optimization; advanced workload analytics. |
| Dynatrace | APM available on-premises, as a managed service or as a SaaS solution. | OneAgent architecture supports a wide array of environments, from mainframe to COTS and SaaS, through a combination of legacy and newer technology infrastructure and tools. | Real-time topology and AI algorithms that automatically detect anomalies and assess business impact across users, applications and infrastructure. |
| IBM | SaaS-based multi-tenant APM solution. | Web-based UI and configurable dashboard that monitors AWS, Azure and other cloud environments using cloud-native APIs; strong AI capabilities. | RUM, synthetic transaction monitoring, log analytics, middleware monitoring, multivariate anomaly detection and business insight. |
| Microsoft | Full APM support available through a SaaS solution; basic APM functionality through System Center Operations Manager. | Works with Azure and supports .NET and Java, along with apps written in Python, Go and Node.js. | DEM, log analytics and cloud-native monitoring for Azure, with support for containers and Kubernetes. |
| New Relic | SaaS-based APM. | Auto-instrumentation framework supports cloud platforms such as AWS, Azure and Google Cloud, with an excellent UI and strong workflow features. | Dashboard provides deep visibility into applications and performance; includes machine learning and AI features. |
| Oracle | On-premises and SaaS-based APM through different applications. | Oracle Management Cloud handles Oracle workloads as well as heterogeneous frameworks; available in different versions for specific purposes. | Strong analytics and orchestration features; growing support for modern programming languages and open source tools. |
| Riverbed | On-premises and SaaS APM. | Core APM solution, AppInternals, delivers agent-based bytecode instrumentation and deep integration with the vendor's other products and tools. | Coupled DEM and NPMD functions; supports large volumes of data through comprehensive infrastructure monitoring and application and user management. |
| SolarWinds | On-premises and SaaS solutions. | Powerful tools that span IT networks, infrastructures and applications, including host agents, SNMP polling and application dependency mapping. | The SaaS solution, AppOptics, supports Docker and Kubernetes, along with code-level instrumentation and infrastructure monitoring. |
Top 10 Hyperconverged Infrastructure (HCI) Solutions https://www.datamation.com/data-center/top-10-hyperconverged-infrastructure-hci-solutions/ Tue, 22 Dec 2020 14:48:41 +0000 https://datamation.com/?p=20403

A hyperconverged infrastructure (HCI) solution is a primary tool for connecting, managing and operating interconnected enterprise systems. The technology helps organizations virtualize storage, servers and networks. While converged infrastructure uses hardware to achieve this objective, HCI takes a software-centric approach.

To be sure, hyperconvergence has its pros and cons. Yet the advantages are clear: HCI boosts flexibility by making it easier to scale according to usage demands and adjust resources faster and more dynamically. By virtualizing components it’s possible to build more efficient databases, storage systems, server frameworks and more. HCI solutions increasingly extend from the data center to the edge. Many also incorporate artificial intelligence and machine learning to continually improve, adapt and adjust to fast-changing business conditions. Some also contain self-healing functions.

By virtualizing an IT environment, an enterprise can also simplify systems management and trim costs, leading to a lower total cost of ownership (TCO). Typically, HCI environments use a hypervisor, usually running on a server with direct-attached storage (DAS), to create a data center pool of systems and resources. Most support heterogeneous hardware and software systems. The end result is a more flexible, agile and scalable computing framework that makes it simpler to build and manage private, public and hybrid clouds.

How to Select the Right HCI Solution

A number of factors are important when evaluating HCI solutions. These include:

Edge-core cloud integration. Organizations have vastly different needs when it comes to connecting existing infrastructure, clouds and edge services. For instance, an organization may require only the storage layer in the cloud. Or it may want to duplicate or convert configurations when changing cloud providers. Ideally, an HCI solution allows an enterprise to change, upgrade and adjust as infrastructure needs change.

Analytics. It’s crucial to understand operations within an HCI environment. A solution should provide visibility through a centralized dashboard but also offer ways to drill down into data, and obtain reports on what is taking place. This also helps with understanding trends and doing capacity planning.

Storage management. An HCI solution should provide support for setting up and configuring a diverse array of storage frameworks, managing them and adapting them as circumstances and conditions change. It should make it simple to add nodes to a cluster and support things like block file and object-oriented storage. Some systems also offer NVMeOF (non-volatile memory express over fabrics) support, which allows an enterprise to rearchitect storage layers using flash memory.

Hypervisor ease of use. Most solutions support multiple hypervisors. This increases flexibility and configuration options—and it’s often essential in large organizations that rely on multiple cloud providers. But it’s important to understand whether you’re actually going to use this feature and what you plan to do with it. In many cases, ease of use and manageability are more important than the ability to use multiple hypervisors.

Data protection integration. It’s important to plug in systems and services to protect data—and apply policy changes across the organization. It’s necessary to understand whether this protection is scalable and adaptable, as conditions change. Ideally, the HCI environment can replace disparate backup and data recovery systems. This greatly improves manageability and reduces costs.

Container support. A growing number of vendors support containers, or plan to do so soon. Not every organization requires this feature, but it’s important to consider whether your organization may move in this direction.

Serverless support. Vendors are introducing serverless solutions that support code-triggered events. This has traditionally occurred in the cloud but it’s increasingly an on-premises function that can operate within an HCI framework.
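The code-triggered event model described above can be sketched as a minimal in-process dispatcher. This is a concept illustration only; the decorator, event names and handler are all invented for the example, not part of any vendor's HCI product.

```python
# Minimal sketch of code-triggered events: handlers register for an
# event type and run whenever a matching event arrives.
# All names here are illustrative, not a real serverless API.
from collections import defaultdict

_handlers = defaultdict(list)

def on(event_type):
    """Decorator that registers a handler for an event type."""
    def register(fn):
        _handlers[event_type].append(fn)
        return fn
    return register

def emit(event_type, payload):
    """Invoke every handler registered for this event type."""
    return [fn(payload) for fn in _handlers[event_type]]

@on("file.uploaded")
def make_thumbnail(event):
    return f"thumbnail for {event['name']}"

print(emit("file.uploaded", {"name": "report.pdf"}))  # prints ['thumbnail for report.pdf']
```

In a serverless platform the `emit` side is the infrastructure itself (a storage write, a queue message), while user code supplies only the handlers.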

Here are ten leading HCI solutions:

Leading Hyperconverged Infrastructure Solutions

Cisco HyperFlex HX-Series

The Cisco HyperFlex HX data platform manages business and IT requirements across a network. The solution accommodates enterprise applications, big data, deep learning and other components that extend from the data center to remote offices and out to retail sites and IoT devices. The platform is designed to work on any system or any cloud.

Pros

  • The platform includes hybrid, all-flash, all-NVMe, and edge configurations to deliver maximum flexibility and a high level of security, including self-encrypting options.
  • It relies on an integrated network fabric, and powerful data optimization features to deliver hyperconvergence to a wide range of workloads and use cases.
  • HyperFlex HX is highly scalable.
  • The technology supports deep learning on GPU-only nodes.

Cons

  • Requires an integrated Cisco network.
  • Some users find the pricing model confusing and somewhat high.
  • Limitations with analytics.
  • Systems configurations and manageability can present challenges.

DataCore Software-Defined Storage

DataCore SDS delivers a highly flexible approach to HCI. It offers a suite of storage solutions that accommodate mixed protocols, hardware vendors and more within converged and hyperconverged SAN environments. The software-defined storage framework, SANsymphony, features block-based storage virtualization and is designed for high availability. The vendor focuses heavily on healthcare, education, government and cloud service providers.

Pros

  • Supports mixed SAN, flash and disk environments.
  • Handles load balancing and policy management across heterogeneous systems.
  • Offers pool capacity and centralized control of primary and secondary storage.
  • Strong failover capabilities.

Cons

  • Some find the user interface daunting.
  • Licensing can be somewhat complex, though the vendor has introduced capacity-based licenses.
  • Some users report difficulties obtaining adequate customer support.

Dell/EMC VxRail

VxRail delivers a fully integrated, preconfigured, and pre-tested VMware hyper-converged infrastructure appliance. It delivers virtualization, compute and storage within a single appliance. The HCI platform takes an end-to-end automated lifecycle management approach.

Pros

  • Delivers a single point of support by default for all software and hardware.
  • Cloud based multi-cluster management and intelligent upgrade staging.
  • Strong Kubernetes support.
  • Offers a lockstep 30-day synchronous release with VMware vSphere.
  • Users report low total cost of ownership.

Cons

  • Limited support for mixing older flash clusters and hyper-clusters.
  • Users report some manageability challenges, such as setting up naming schemas.
  • Can be somewhat pricey, depending on the IT environment and use case.

HPE SimpliVity

With SimpliVity, HPE aims to take hyperconverged architectures beyond software-defined and into AI-driven territory. The HCI platform delivers a self-managing, self-optimizing and self-healing infrastructure that uses machine learning to continually improve. HPE offers solutions specifically designed for data center consolidation, multi-GPU image processing, high-capacity mixed workloads and edge environments.

Pros

  • Offers strong storage management, backup and data replication capabilities.
  • Offers a single well-designed interface for the entire solution.
  • Strong partner relationships, including SAP, Microsoft, Citrix, VMware and Docker.
  • Highly scalable and flexible without a penalty for availability.

Cons

  • Some users encounter difficulties moving SimpliVity clusters within the platform.
  • Can be pricey, depending on the use case.
  • Some users complain about the lack of customer and technical support.

NetApp HCI

NetApp HCI consolidates mixed workloads while delivering predictable performance and granular control at the virtual machine level. The solution scales compute and storage resources independently. It is available in different compute and storage configurations, thus making it flexible and scalable across data center, cloud and web infrastructures.

Pros

  • Delivers strong manageability, granular controls and a high level of flexibility for HCI within a single pane of glass.
  • Automates numerous functions with a strong API framework and ecosystem.
  • Handles numerous types of workloads, including VMware, SQL, Oracle, SAP, Citrix and Splunk.
  • Highly scalable.

Cons

  • Installation and initial cabling can be challenging.
  • Users complain that documentation is sometimes lacking.
  • Some users complain about inadequate security controls and the lack of integration with other security solutions.

Nutanix AOS

Nutanix offers a fully software-defined hyperconverged infrastructure that provides a single cloud platform for tying together hybrid and multi-cloud environments. Its Xtreme Computing platform natively supports compute, storage, virtualization and networking—including IoT—with the ability to run any app at scale. It also supports analytics and machine learning.

Pros

  • Offers a feature-rich platform that can be applied at scale. The platform is especially adept at handling data compression and deduplication.
  • Strong and easy-to-use management capabilities through a single user interface.
  • Provides automated application management in a full-cloud stack.
  • Users report excellent technical support.

Cons

  • Among the more expensive solutions on the market.
  • Users report some problems with complexity and using networking functions, including encryption and micro-segmentation.
  • Users report some difficulties integrating older legacy systems with the HCI environment.

StarWind HyperConverged Appliance

StarWind offers an HCI appliance focused on both operational simplicity and performance. It bills its all-flash system as turnkey with ultra-high resiliency. The solution, designed for SMBs, ROBOs and enterprises, aims to trim virtualization costs through a highly streamlined and flexible approach. It connects commodity servers, disks and flash; a hypervisor of choice; and associated software within a single manageable layer.

Pros

  • The appliance is highly scalable. It supports numerous disks and flash components, and easily scales by adding extra nodes.
  • It offers attractive pricing and low TCO.
  • The vendor’s ProActive support framework spots abnormalities and anomalies through persistent monitoring and machine learning.

Cons

  • The vendor’s Linux interface isn’t as developed and mature as the Windows interface it offers.
  • Some complaints from users about the interface and manageability functions.
  • Users say documentation could be more complete.

StarWind Virtual SAN

StarWind Virtual SAN is essentially a software version of the vendor’s HyperConverged appliance. It eliminates the need for physically shared storage by “mirroring” internal hard disks and flash between hypervisor servers. The approach is designed to cut costs for SMB, ROBO, Cloud and Hosting providers. Like the vendor’s appliance, StarWind Virtual SAN is a turnkey solution.

Pros

  • Offers a powerful control panel with insight into the status and health of the VSAN.
  • Uses data locality and server-side caching to deliver high performance and fault tolerance.
  • Delivers low overhead and maintenance costs.
  • Users praise the vendor's ProActive support, which spots abnormalities and anomalies through monitoring and machine learning.

Cons

  • Some users complain that the licensing framework can be difficult and somewhat restrictive.
  • Lacks some features required for larger enterprises with more complex configurations.
  • PowerShell documentation presents challenges for some users.

VMware vCenter Server

The vCenter Server delivers centralized visibility as well as robust management functionality at scale. The HCI solution is designed to manage complex IT environments that require a high level of extensibility and scalability. It includes native backup and restore functions. vCenter supports plug-ins for major vendors and solutions, including Dell EMC, IBM and Huawei Technologies.

Pros

  • vCenter can manage up to 70,000 virtual machines and 5,000 hosts across up to 15 vCenter Server instances.
  • Offers templates and RESTful APIs to automate setup and simplify deployments.
  • Includes machine learning capabilities.
  • Users praise VMware for streamlined setup, ease of use and performance.
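As a sketch of the RESTful automation noted above, the snippet below builds (but does not send) a request to vCenter's session endpoint. The hostname and credentials are placeholders; the `/api/session` path follows the vSphere 7+ REST API and may differ on older versions, so treat this as an assumption to verify against your deployment.

```python
# Build a vCenter REST session request without sending it -- a sketch of
# automating setup via the API. The hostname and credentials are
# placeholders; /api/session is the assumed vSphere 7+ endpoint.
import base64
import urllib.request

def build_session_request(host, user, password):
    """Return a POST request that would create a vCenter API session."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"https://{host}/api/session",
        method="POST",
        headers={"Authorization": f"Basic {token}"},
    )

req = build_session_request("vcenter.example.com", "admin", "secret")
print(req.full_url)  # https://vcenter.example.com/api/session
```

Sending the request with `urllib.request.urlopen` would return a session token to pass as a header on subsequent automation calls.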

Cons

  • Some users find the user interface confusing and difficult.
  • A faster HTML5 interface lacks key functionality found in the vendor’s Flex interface.
  • Kubernetes functionality only works in the cloud.
  • The solution can be pricey. Licensing is typically suitable only for medium and large enterprises.

VMware vSAN

vSAN is an enterprise-class storage virtualization solution that manages storage on a single software-based platform. When combined with VMware's vSphere, an organization can manage compute and storage within a single platform. The solution connects to a broad ecosystem of cloud providers, including AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud and Alibaba Cloud.

Pros

  • Offers powerful features, scales well and delivers excellent flexibility.
  • Excellent user interface.
  • Integrates seamlessly with VMware products but also with numerous partners.
  • vSAN manages all storage functionality. It eliminates the need for additional storage support.

Cons

  • Users cite occasional problems with failure protection and rebalancing components.
  • Upgrades and changes can present challenges.
  • Expensive relative to other solutions on the market. In many cases, the platform requires licenses for multiple VMware components in order to operate.

Hyperconverged Infrastructure Solutions Comparison Table

| Vendor | Pros | Cons |
| --- | --- | --- |
| Cisco HyperFlex HX-Series | Supports numerous configurations and use cases; highly scalable; supports GPU-based deep learning | Requires Cisco networking equipment; pricing model can be confusing; some users find manageability difficult |
| DataCore Software-Defined Storage | Supports mixed SAN, flash and disk environments; excels with load balancing and policy management; strong failover capabilities | User interface can be daunting; licensing can become complex; customer support is inconsistent |
| Dell/EMC VxRail | Delivers a true single point of management and support; handles multi-cloud clusters well; integrates well with storage devices; low TCO | Limited support for mixing older flash clusters and hyper-clusters; some management challenges; sometimes pricey |
| HPE SimpliVity | Strong storage management, backup and data replication capabilities; users like the interface; strong partner relationships; highly scalable | Managing clusters can present challenges; pricey; users cite problems with technical and customer support |
| NetApp HCI | Excellent manageability with granular controls; strong API framework; support for numerous workloads from different vendors; highly scalable | Installation and initial cabling can be difficult; documentation sometimes lacking; users say some security features and controls are missing |
| Nutanix AOS | Feature-rich platform; single user interface with strong management tools; users report excellent tech support | Pricey; users report some complexity with encryption and micro-segmentation; can be difficult to integrate with legacy systems |
| StarWind HyperConverged Appliance | Highly scalable; supports numerous configurations and technologies; users report low TCO; strong vendor support through always-on monitoring and machine learning | Linux interface isn't as mature as the Windows interface; some find the interface difficult; users say documentation is sometimes lacking |
| StarWind Virtual SAN | Excellent control panel; high fault tolerance; low overhead and maintenance costs; strong vendor support through always-on monitoring and machine learning | Licensing framework can be difficult and restrictive; lacks some features important for large enterprises; PowerShell documentation can be challenging |
| VMware vCenter Server | High capacity; strong APIs; machine learning features; high performance | Interface can present challenges; Kubernetes works only in the cloud; pricey |
| VMware vSAN | Powerful features; highly scalable and flexible; integrates with numerous partners; consolidates storage support | Users cite problems with failure protection and rebalancing; upgrades may present problems; pricey; requires multiple licenses from VMware for various needed modules |
Best Stream Analytics Software https://www.datamation.com/big-data/best-stream-analytics-software/ Thu, 17 Dec 2020 14:54:38 +0000 https://datamation.com/?p=20406

Stream analytics software analyzes current and historical data as it travels across networks, into and out of databases and through application programming interfaces (APIs).

As a key component of data analytics, the ability to monitor and understand data in real time is at the center of today's digital enterprise. But achieving success is a growing challenge, particularly as the volume of data grows. Crucial to the success of any Big Data project, stream analytics monitors events and information exchanges in real time and provides alerts and notifications when certain conditions occur.

As a result, stream analytics software is useful for a wide array of enterprise data tasks. These include geospatial analysis, understanding social media streams, tying together telemetry data from IoT devices, predictive analytics, spotting fraud, real-time point-of-sale and inventory analysis, and remote monitoring and maintenance tasks.

Some tools offer visualization features that allow users to view complex relationships among systems, connected devices and various types of data. Many rely on widely used frameworks such as Apache Kafka, SQL and JavaScript. The common theme for all stream analytics systems is that their data processing engines are designed to handle enormous volumes of data streaming from multiple sources simultaneously. Stream analytics is particularly powerful when it operates in the cloud.
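The core pattern these processing engines implement — consuming an unbounded stream, aggregating over windows, and firing alerts when a condition is met — can be sketched in a few lines of plain Python. The event shape and threshold here are invented for illustration; production engines do this at far larger scale, in parallel, across many sources.

```python
# Toy stream processor: tumbling-window averages with a threshold alert.
# Event shape and threshold are invented for illustration only.
from itertools import islice

def windows(stream, size):
    """Yield fixed-size (tumbling) windows from an event stream."""
    it = iter(stream)
    while batch := list(islice(it, size)):
        yield batch

def alerts(stream, size=3, threshold=100.0):
    """Yield an alert whenever a window's average value exceeds the threshold."""
    for i, batch in enumerate(windows(stream, size)):
        avg = sum(e["value"] for e in batch) / len(batch)
        if avg > threshold:
            yield {"window": i, "avg": avg}

events = [{"value": v} for v in (90, 95, 100, 120, 130, 140)]
print(list(alerts(events)))  # the second window (120, 130, 140) trips the alert
```

Because `alerts` is a generator over an iterator, the same logic works on an endless source (a socket, a Kafka consumer) as on the small list used here.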

How to Select the Best Stream Analytics Software For Your Company

There are several crucial factors to consider when selecting a stream analytics platform. These include:

Compatibility: It’s critical to survey enterprise data sources, map out connection points for applications and systems, and thoroughly understand which data streams are important—and for what purposes. Building an end-to-end pipeline requires support for coding languages, database formats and more.

Features: An organization should ensure that the package offers the right set of features, and they are robust and flexible enough to provide optimal results. Key features frequently include visualization dashboards, rich reporting capabilities, integrated development tools, data preparation and enrichment capabilities, and automation through machine learning.

Performance and reliability: Not only must a stream analytics software package operate with ultra-low latency, it has to provide the flexibility and scalability to add, subtract and change inputs and connection points—including message brokers and outside processing engines. Some packages also have built-in recovery capabilities.

Cost: It’s wise to view stream analytics in the context of total cost of ownership. Some packages now operate on a utility model—you pay for what you use and the streaming units you consume. Others use a more conventional licensing approach.

Security and compliance: Look for a package that incorporates incoming and outgoing encryption, as well as processing in memory so that data isn’t stored with a cloud provider. Equally important: ensure that a package adheres to all regulatory and compliance standards. Look for compliance certifications.

Top Stream Analytics Software

Here are ten leading stream analytics solutions to consider:

Amazon Elasticsearch Service

The managed service delivers a straightforward way to deploy, operate and scale Elasticsearch clusters in AWS cloud. It provides direct access to the Elasticsearch APIs so that existing code and applications work seamlessly with the service. The platform offers an open-source search and analytics engine that focuses on use cases such as log analytics, real-time application monitoring, and clickstream analysis.

Pros

  • Users can set up and configure a domain in minutes. It supports programmatic access through AWS CLI or the AWS SDKs.
  • The platform offers a high level of scalability, including support for numerous CPU, memory and storage configurations.
  • Offers up to 3 PB of attached storage.
  • Provides strong security, including identity and access controls; encryption of data at rest and in motion; index-level, document level and field-level security; and audit logs.

Cons

  • The platform may present a formidable learning curve.
  • Search queries and indexing can be difficult. If these processes aren’t set up correctly they can impact performance and results.
  • Some users find the interface daunting and have trouble customizing the service to the extent they desire.

Amazon Kinesis

The platform collects and processes large data streams in real time. Users can create applications that read data as records; these applications use the Kinesis Client Library and run on Amazon EC2 instances. Kinesis supports dashboards, dynamic alerts, and dynamic pricing and advertising strategies, along with many other functions, and supports data management across other AWS services.
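Kinesis routes each record to a shard by hashing its partition key with MD5 into a 128-bit range. The sketch below is a simplified illustration of that routing: it assumes an even split of the hash range across shards and ignores explicit hash keys and resharding, so it is not the service's exact behavior.

```python
# Simplified sketch of Kinesis partition-key -> shard routing: MD5 of the
# key maps into a 128-bit range split evenly across shards. Real streams
# can have uneven shard ranges after resharding; this is illustrative only.
import hashlib

def shard_for(partition_key, shard_count):
    """Map a partition key to a shard index via its MD5 hash."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    range_size = 2**128 // shard_count
    return min(h // range_size, shard_count - 1)

# Records with the same key always land on the same shard,
# which is what preserves per-key ordering.
assert shard_for("device-42", 4) == shard_for("device-42", 4)
```

The practical consequence: choosing a high-cardinality partition key spreads load across shards, while a skewed key concentrates traffic on a few of them.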

Pros

  • Supports live metrics and reporting.
  • Accommodates complex stream processing, including aggregating multiple streams. This allows more robust downstream processing.
  • Kinesis offers a flexible approach, including support for data sources pushing data directly into a stream.


Cons

  • Can present a formidable learning curve. Some users also report difficulty with documentation.
  • Can be costly and require significant input if an organization has a large number of data sources and requires a larger number of shards.
  • Some users report that extended fan-outs are difficult to manage.

Azure Event Hubs

Microsoft bills Azure Event Hubs as a “scalable event processing service that ingests and processes large volumes of events and data, with low latency and high reliability.” The big data streaming platform and event ingestion service processes millions of events per second, typically in an Azure cloud. It delivers low latency and strong integration with connected data sources.

Pros

  • Uses the Kafka protocol, so existing Apache Kafka applications can be configured to talk to Event Hubs. It also supports .NET, Java, Python and JavaScript.
  • Azure Event Hubs is a highly scalable framework that can extend to terabytes. An auto-inflate feature simplifies and streamlines scaling.
  • Strong support for telemetry sharing, user telemetry processing and strong transaction processing, with live dashboards.
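Because Event Hubs exposes a Kafka-compatible endpoint, an existing Kafka client typically needs only its connection settings changed. The dict below sketches those settings in kafka-python-style option names; the namespace and connection string are placeholders, and you should confirm the exact option names against the client library you use.

```python
# Assumed kafka-python-style settings for pointing an existing Kafka
# producer at an Event Hubs namespace over its Kafka endpoint (port 9093,
# SASL_SSL). Namespace and connection string are placeholders.
event_hubs_kafka_config = {
    "bootstrap_servers": "mynamespace.servicebus.windows.net:9093",
    "security_protocol": "SASL_SSL",
    "sasl_mechanism": "PLAIN",
    # Event Hubs uses the literal username "$ConnectionString" and the
    # namespace connection string as the SASL password.
    "sasl_plain_username": "$ConnectionString",
    "sasl_plain_password": "<your-event-hubs-connection-string>",
}

print(sorted(event_hubs_kafka_config))
```

With these settings passed to a Kafka producer or consumer constructor, the application code itself (topics, serialization, polling loops) can stay unchanged.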

Cons

  • The platform may require custom coding to support more advanced functionality.
  • Some users report difficulty with the interface and find the learning curve difficult.
  • Non-Azure cloud users may face increased difficulty using certain functions, including scheduling.

Azure Stream Analytics

The solution relies on a complex event processing engine to ingest high volumes of data from diverse sources in real-time. It extracts data from devices, sensors, clickstreams, social media feeds, and enterprise applications. This makes it ideal for numerous scenarios, ranging from fleet management and predictive maintenance to point of sale and IoT.

Pros

  • The platform can run in the cloud or on the intelligent edge. It uses the same tools and query language for both.
  • Azure Stream Analytics delivers a high level of configurability and scalability.
  • It integrates seamlessly with various Azure services and adds them to menus automatically.

Cons

  • Doesn’t support auto-scaling. Users must configure streaming units manually.
  • Some users report crashes when the service encounters invalid and malformed data sets.
  • Lacks some of the advanced features required for more advanced IoT implementations.

Confluent

The vendor offers both fully managed and self-managed service options within an open-source framework. A SQL base allows users to build streaming analytics applications that monitor and manage data and events in real time. The platform ties into the Apache Kafka ecosystem to support highly complex tasks across numerous industries and business environments.

Pros

  • Provides powerful tools, features and capabilities to manage Kafka clusters.
  • Strong end-to-end visibility and manageability.
  • Powerful scaling functions due to numerous built-in connectors.

Cons

  • A complex platform that can present learning challenges.
  • Some users report difficulty testing within the platform.
  • Users say that role-based-access controls (RBAC) present some challenges.

Google Cloud Pub/Sub

The asynchronous messaging service is designed to decouple services that produce events from services that process events. It’s frequently used as messaging-oriented middleware or for event ingestion and delivery for streaming analytics pipelines.
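The decoupling Pub/Sub provides — producers publish to a topic without knowing who, or how many, consumers exist — can be illustrated with an in-process stand-in. This is a concept sketch only, not the google-cloud-pubsub client library, and all names are invented.

```python
# In-process stand-in for topic-based pub/sub: publishers and subscribers
# know only the topic name, never each other. Concept sketch only --
# not the google-cloud-pubsub client library.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on a topic."""
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to all current subscribers of a topic."""
        for cb in self._subs[topic]:
            cb(message)

broker = Broker()
received = []
broker.subscribe("clicks", received.append)
broker.publish("clicks", {"page": "/home"})
print(received)  # [{'page': '/home'}]
```

The managed service adds what this sketch omits: durable storage, at-least-once delivery, acknowledgements and horizontal scale, which is why it suits streaming analytics pipelines.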

Pros

  • High availability and consistent performance at scale.
  • Ease of configuration and a high level of flexibility.
  • Strong functionality along with tight integration with numerous other products and data services.

Cons

  • Some find the user interface confusing and have difficulty managing certain features.
  • Can be pricey for certain types of implementations and use cases. Some users find the pricing framework confusing.
  • Can be difficult to use without customizations.

IBM Streaming Analytics

IBM Streaming Analytics is equipped to analyze and correlate a broad range of streaming data, including unstructured text, video, audio, geospatial, and sensor data. It features real-time analysis of data in motion. It can analyze millions of events per second, enabling sub-millisecond response times. It’s available with IBM Cloud Pak for Data-as-a-Service.

Pros

  • Receives high user ratings for capacity, flexibility and scalability.
  • Easy to integrate with other IBM cloud services.
  • Offers a large set of optimized and tested toolkits.
  • Active developer community that contributes packages and solutions.

Cons

  • Numerous users report that documentation is sometimes lacking.
  • Can be expensive to operate in production environments.
  • Some users find it challenging to write complex business rules into the platform.

Kibana

The open-source data visualization dashboard is designed to handle Elasticsearch data and navigate the elastic stack. It reaches across documents and data sets to deliver numerous visualization formats, including histograms, line graphs, pie charts, and sunbursts. It also accommodates location analysis, time series models and machine learning.

Pros

  • Kibana is flexible and supports a high level of customization, including custom actions.
  • Delivers role-based and highly granular access controls.
  • Offers robust dashboards and built-in drill-down features that allow users to explore data in deeper ways.


Cons

  • The platform trails competitors for ease of setup and use.
  • Can consume a high level of computing resources in certain situations.
  • Search filters and notifications can be limited within certain scenarios.

Lenses

Lenses offers a developer workspace for building and operating real-time applications on Apache Kafka Connect and Kubernetes infrastructure. The platform is available on premises and in the cloud. It supports SQL-based real-time applications with centralized schema management.

Pros

  • Lenses receives high marks from users for ease of use and quality of support.
  • It offers a secure portal that allows users to configure, deploy and manage hundreds of Kafka Connect-compatible connectors, with integrated error handling.
  • Includes Google-like search and automatic discoverability of data entities and metadata generated by real-time applications.

Cons

  • Trails other stream analytics vendors for ease of setup.
  • Some users would like to see a richer set of features and capabilities for supporting DataOps.

TIBCO Streaming

TIBCO Streaming delivers real-time enterprise-grade streaming analytics that reaches across the organization and out to the IoT. The cloud-ready solution supports the development of affordable real-time applications. It analyzes millions of events per second and provides ultra-fast continuous querying capabilities.

Pros

  • Handles highly complex data transformations.
  • Offers powerful user controls to manage ad-hoc queries, control and set business logic, define rules and models, configure charts, change the panel layout, create and manage alerts, and more.
  • Offers more than 150 pre-built adapters and visualization options for Kafka and numerous other formats.
  • Delivers full cloud-enablement with support for Docker.

Cons

  • Some users report a lack of support for managing third party library dependencies.
  • Users report that security controls could be more robust.
  • May lack flexibility for certain configurations, such as reusing modular components.

Stream Analytics Software Comparison Table

| Analytics Vendor | Pros | Cons |
| --- | --- | --- |
| Amazon Elasticsearch Service | Fast setup; highly scalable; large storage capacity; strong security controls | Learning curve; queries and indexing can be difficult; interface can be challenging |
| Amazon Kinesis | Supports live metrics and reporting; handles highly complex stream processing; flexible | Learning curve; can be pricey; large implementations and fanouts can be difficult to manage |
| Azure Event Hubs | Strong support for Kafka and development languages; highly scalable with large capacity; strong telemetry support | Advanced functionality may require custom coding; interface can be daunting; limited functionality for non-Azure users |
| Azure Stream Analytics | Runs in the cloud or on the edge; high level of configurability and scalability; integrates seamlessly with other Azure modules and services | Lacks auto-scaling functionality; crash-prone when it encounters invalid and malformed data; lacks advanced features required for IoT projects |
| Confluent | Powerful features and tools; strong end-to-end visibility and manageability; powerful scaling functions | Complexity of platform; testing within the platform may be challenging; RBACs can prove difficult |
| Google Cloud Pub/Sub | High availability and consistent performance at scale; ease of configuration; flexible; robust functionality | Expensive for certain uses and configurations; may require additional customization; some find the user interface difficult |
| IBM Streaming Analytics | Excels in capacity, flexibility and scalability; tight integration with other IBM cloud services; robust toolkit; active developer community | Users report documentation is subpar; can be challenging to operate in production environments; doesn't always support complex business rules |
| Kibana | Highly flexible and customizable; strong role-based access controls; excellent dashboards and drill-down capabilities | Can be difficult to set up and use; may heavily consume computing resources; limited search filters and capabilities within certain scenarios |
| Lenses | Ease of use; strong support; robust management portal; strong search capabilities | Setup can be challenging; users would like to see richer features for DataOps |
| TIBCO Streaming | Handles highly complex data transformations; strong support for business logic and user controls; full cloud enablement with Docker support | Lacks support for third-party dependencies; users report some security features lacking; lacks flexibility for reusing modular components |
Top AIOps Companies https://www.datamation.com/artificial-intelligence/top-aiops-companies/ Thu, 05 Nov 2020 17:10:17 +0000

Artificial intelligence for IT operations (AIOps) taps artificial intelligence (AI) to streamline and simplify information technology (IT) management. The technology collects data across increasingly complex IT infrastructure, identifies key patterns and events, and automates problem resolution. AIOps platforms typically rely on advanced analytics and machine learning tools to identify the root cause of issues and address them without human involvement.

In recent years, as data analytics has exploded and cloud computing has become commonplace, AIOps has gone mainstream. By 2023, 40% of DevOps teams will augment application and infrastructure monitoring tools with artificial intelligence for IT operations (AIOps) platform capabilities, Gartner’s research noted. Currently, Gartner estimates the size of the AIOps platform market at between $300 million and $500 million per year.

The appeal of AIOps is straightforward. Gartner points out that these platforms enhance “decision making by contextualizing large volumes of varied and volatile data.” However, it also noted that while the space is advancing rapidly and adoption remains on the upswing, “AIOps platform maturity, IT skills and operations maturity are the chief inhibitors to rapid time to value.”

The upshot? It’s critical to understand what your business needs are and what value proposition vendors offer before committing to an AIOps platform.

How to Choose an AIOps Company

If you’re in the market for an AIOps solution, here are some things to consider:

  • A starting point for choosing a vendor and a specific solution is understanding how your current IT infrastructure can benefit from AIOps and what use case serves as a good starting point for replacing rules-based analytics with an automated framework of network diagnostics.
  • Two general categories of AIOps exist: domain-centric platforms with built-in monitoring tools and domain-agnostic stand-alone solutions. Each has tools for ingesting events, metrics and traces. Understanding which delivers bigger benefits can clarify the vendor-selection process.
  • It’s important to select a solution that has business-specific IT service management (ITSM) use cases revolving around task automation, knowledge management and change analysis.
  • Successful implementations enable insights across IT operations management (ITOM) through three crucial aspects of AIOps, Gartner reports. These include observe, engage and act. Ensure that your organization understands how a solution fits—and connects to other tools—before finalizing vendor selection.

Top AIOps Companies

Here are 10 of the top vendors in the AIOps arena, along with some of their top features and selling points.

AppDynamics

Value Proposition: AppDynamics Central Nervous System ranks high among AIOps vendors with its broad and deep views into networks. Its parent company is Cisco Systems, though the solution works across numerous systems and frameworks. Top customers include Alaska Airlines, Paychex and Nasdaq. Gartner ranked AppDynamics among the “Leaders” in its 2020 Magic Quadrant for Application Performance Monitoring. It also ranked as a “Leader” on the G2 Grid for AIOps Platforms and earned 4.2 out of 5 stars at G2 user ratings.

Summary: Central Nervous System focuses on three primary tasks: visibility, insights and action. It incorporates a cognition engine that delivers cross-domain visibility, insights and automation, along with automated anomaly detection and root cause analysis. This aids in reducing mean time to resolution (MTTR). A serverless APM shows relationships among applications and promotes deep integrations across numerous partners. This allows users to gain an expansive view of application code and the underlying network. Cisco ACI and AppDynamics integration delivers insights into cloud infrastructure, including network-configured policies and automated security enforcement.
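Automated anomaly detection of this kind is often built on statistical baselines. As a simplified, generic illustration (not AppDynamics' actual algorithm), a trailing z-score check can flag a latency spike against recent history:

```python
import statistics

def detect_anomalies(samples, baseline_len=10, threshold=3.0):
    """Flag indexes whose z-score against the trailing baseline exceeds
    the threshold; a toy stand-in for automated anomaly detection."""
    anomalies = []
    for i in range(baseline_len, len(samples)):
        baseline = samples[i - baseline_len:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9  # avoid divide-by-zero
        if abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Steady response times with a single latency spike at index 12.
latency = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 100, 101, 500, 100]
# detect_anomalies(latency) == [12]
```

Commercial engines layer seasonality models and learned baselines on top, but the principle, deviation from an expected range, is the same.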

BigPanda

Value Proposition: The vendor has emerged as a popular choice in the AIOps space, with customers such as InterContinental Hotels Group, Foot Locker, United Airlines and Staples. It recently introduced what it describes as the “first Event Correlation and Automation platform powered by AIOps.” It focuses on gleaning insights and resolving IT issues across the entire IT stack and generating unified analytics. BigPanda received 4.1 out of 5 stars from users at G2.

Summary: BigPanda approaches AIOps through a “monitoring, change, and topology” framework that is part of an overall ITSM framework. It uses proprietary “open box” machine learning technology to spot, correlate and resolve problems. Key AIOps capabilities and features include: an Open Integration Hub that collects, normalizes and enriches monitoring data; Open Box Machine Learning; an operations console that handles bi-directional integrations; and unified analytics. The company claims that its machine learning component reduces noise by 95% or more while nearly eliminating false positives.
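Event correlation and noise reduction can be illustrated generically: alerts that share a source and arrive close together in time are merged into a single incident. The sketch below is an assumption-laden toy, not BigPanda's Open Box Machine Learning:

```python
def correlate(alerts, window=300):
    """Merge alerts for the same (host, check) that arrive within
    `window` seconds of an open incident's last alert; a toy version
    of alert correlation, not any vendor's actual algorithm."""
    incidents = []
    open_incident = {}   # (host, check) -> index into incidents
    for ts, host, check in sorted(alerts):
        key = (host, check)
        idx = open_incident.get(key)
        if idx is not None and ts - incidents[idx][-1][0] <= window:
            incidents[idx].append((ts, host, check))
        else:
            open_incident[key] = len(incidents)
            incidents.append([(ts, host, check)])
    return incidents

# A flapping CPU check fires four alerts but yields one incident.
raw = [(0, "db1", "cpu"), (60, "db1", "cpu"), (120, "db1", "cpu"),
       (150, "db1", "cpu"), (400, "web1", "disk")]
incidents = correlate(raw)
noise_reduction = 1 - len(incidents) / len(raw)   # 0.6 here
```

Production platforms correlate on topology and learned patterns rather than exact key matches, which is how vendors reach the noise-reduction figures they advertise.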

BMC

Value Proposition: BMC is a leading player in the AIOps space. It offers several products that map, log and manage IT infrastructure—and it has established partnerships with most major players in networking and clouds. The company’s open data access approach taps multiple data sources for historical and streaming data. Customers include Ingram, Boston Scientific, Carfax, Lockheed Martin and Vodafone. Its products receive good to excellent ratings from users at G2 and other rating sites.

Summary: The vendor’s Helix Monitor is an end-to-end service and operations platform that operates under a SaaS model and uses a data-agnostic approach. The solution relies on a containerized, microservices architecture with open APIs and customizable dashboards. It is designed to provide broad monitoring and event management with integrated ITSM and ITOM. The vendor claims that its AIOps solutions reduce noise by about 90%, trim time to identify root cause by 60%, and slash event remediation MTTR by 75%. The company offers other tools, including TrueSight Operations Management, which taps machine learning (ML) and advanced analytics for more holistic monitoring and event management.

Datadog

Value Proposition: Datadog is a SaaS platform that delivers real-time application and IT monitoring along with log management and automation. It boasts major customers such as Peloton, 21st Century Fox, Samsung and Whole Foods Market. The company was ranked a Leader in the 2019 Forrester Wave for Intelligent Application and Service Monitoring. It’s also ranked as a “Leader” in the G2 Grid. The vendor receives 4.2 out of 5 stars at the G2 user ratings site.

Summary: The vendor supports visibility into all modern platforms and applications. It includes robust tools for monitoring, troubleshooting and optimizing performance. This includes log analysis that analyzes and explores data in context. The result is end-to-end proactive monitoring that detects and fixes performance issues through AI-powered self-maintenance tests. The platform also offers an assortment of tools to correlate frontend performance with business impact.

Dynatrace

Value Proposition: The vendor offers a full-stack and highly automated AIOps solution that includes Davis, an assistant that continually processes billions of events and dependencies in milliseconds using AI and open APIs. This allows it to identify IT problems and deliver more precise root cause analysis. Dynatrace AIOps customers include industry giants such as Kroger, Citrix and Experian. Gartner ranked the vendor a Magic Quadrant 2020 “Leader” for APM. The firm also ranked as a “Leader” in the G2 Grid. It receives 4.5 out of 5 stars from users at G2.

Summary: Dynatrace offers several products designed to improve IT monitoring and performance. The AIOps platform, using Davis, takes an all-in-one approach that identifies precise root causes, tackles open ingestion, handles orchestration and addresses topology/dependencies across systems, including clouds and mainframes. The AIOps solution features auto discovery, advanced event analytics, anomaly detection and predictive capabilities. The AI assistant generates topology visualizations and business impact analysis data.

Moogsoft

Value Proposition: The company’s approach is built on an “advanced self-servicing AI-driven observability platform” that’s designed to deliver deep and real-time visibility into IT issues. The Moogsoft solution is designed for software engineers, developers and operations staff. Major customers include Qualcomm, Verizon Media, Fannie Mae and KeyBank. The solution is highly rated among users at G2.

Summary: Moogsoft provides a high level of automation for end-to-end events through its cloud-native AI and ML “Observability” platform. It collects data from numerous sources and events and correlates them through pattern discovery to deliver real-time insights. The solution is designed to identify root causes, use collaboration methods to ensure the right people receive notifications, and filter out noise and reduce alerts so that teams can tackle the most urgent matters. It delivers highly automated remediation for proactive incident resolution.

New Relic

Value Proposition: New Relic focuses on applied intelligence, which aims to detect, understand, and resolve incidents faster through noise reduction and deeper insights. Major customers include American Eagle Outfitters, Hearst and H&R Block. New Relic APM receives a 4.3 out of 5-star rating from users at review site G2.

Summary: New Relic offers a comprehensive list of features for its AIOps platform. This includes availability testing, event logs, event-based notifications, performance metrics, real time monitoring, transaction monitoring, and uptime reporting. The platform offers automated anomaly detection, including highly flexible proactive detection through real-time failure warnings and deep incident intelligence. Applied intelligence offers guidance and analysis designed to speed incident resolution.

PagerDuty

Value Proposition: The company’s focus is on a single platform designed to keep digital systems running all the time and in perfect order. Cloud-native PagerDuty is built to work straight out of the box. It offers more than 370 integrations, including ServiceNow, Slack, Zendesk, AWS, Zoom and many others. Customers include American Express, BBC, Doordash and Netflix. PagerDuty is ranked as a “Leader” on the G2 Grid. It receives a 4.5 out of 5 stars at the G2 user rating site.

Summary: The platform offers powerful features, including on-call management, incident response, event intelligence and analytics. The Event Intelligence module reduces noise and directs insights to the right team for faster and better event resolution. The analytics feature uses pre-built metrics and prescriptive dashboards to deliver broader and deeper insights. The vendor boasts that data science knowledge isn’t needed.

ScienceLogic

Value Proposition: ScienceLogic Platform offers a rich array of IT infrastructure monitoring and remediation tools, including bandwidth monitoring, diagnostics, IP monitoring, real-time analytics resource management, server monitoring, SLA monitoring, uptime monitoring, and web traffic reporting. Major customers include AAA, Cisco, Kellogg’s, Telstra and the EPA. The company was ranked a “Leader” in the Forrester Wave IASM Q2 2019. It receives 4.3 out of 5 stars at the G2 review site.

Summary: The vendor focuses on a three-prong approach: see, contextualize and act. This includes powerful real-time discovery and contextualization capabilities. According to Forrester, ScienceLogic was the top-rated vendor in the intelligent application and service monitoring space for 2019. It noted that ScienceLogic is adept at “handling massive data aggregation and disparate architectures.” The vendor uses an algorithmic approach to build and search through a real-time data lake. This allows the platform to incorporate advanced automation, including run-book automation, predictive capacity allocation, and CMDB rationalization.

Splunk

Value Proposition: Splunk Enterprise collects, analyzes and acts on complex and disparate data generated by IT systems. Customers include Airbus, Dominos, Porsche and Cox Automotive. The vendor was ranked number one by Gartner in Market Share Analysis: ITOM, Performance Analysis Software, 2019 and earned 4.2 out of 5 stars at G2 user ratings.

Summary: Splunk uses machine learning, multi-site clustering and an open development platform to drive operational improvements within an organization. It boasts that it offers a data-to-everything platform designed to investigate, monitor, analyze and act. The framework ingests data from any structure, source and timescale, through AI and machine learning. It supports a broad range of users across the business as well as automated actions based on customized rules or AI-driven decision making. This promotes a framework with reduced IT complexity, 360-degree service visibility and preventative alerts with auto-remediation.
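Automated actions driven by customized rules, as described above, can be sketched generically. The rule format and action names below are hypothetical illustrations, not Splunk's actual configuration syntax:

```python
def evaluate_rules(event, rules):
    """Return the actions triggered by an event under simple
    field-match rules; a minimal stand-in for rules-based
    auto-remediation."""
    actions = []
    for rule in rules:
        if all(event.get(field) == value
               for field, value in rule["when"].items()):
            actions.append(rule["action"])
    return actions

# Hypothetical rules and action names, for illustration only.
rules = [
    {"when": {"service": "web", "status": "down"}, "action": "restart_web"},
    {"when": {"disk_full": True}, "action": "purge_logs"},
]
event = {"service": "web", "status": "down", "host": "web-03"}
# evaluate_rules(event, rules) == ["restart_web"]
```

Real platforms match on searches over ingested data rather than literal field equality, but the mapping from matched conditions to remediation actions is the same shape.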

Top Metadata Management Tools https://www.datamation.com/big-data/metadata-management-tools/ Wed, 10 Jul 2019 08:00:00 +0000

Metadata management solutions play a key role in managing data for organizations of all shapes and sizes, particularly in the cloud computing era. The need for a framework to aggregate and manage diverse sources of Big Data and data analytics — and extract the maximum value from it — is indisputable. Metadata management is designed to address this task. It provides powerful tools that put information assets to work more effectively — including ratcheting up governance and compliance while reducing risk.

Metadata management solutions oversee data across its entire lifecycle. This typically covers four primary areas: data analysis, data value, data governance, and risk and compliance. It may include enterprise metadata management (EMM), which encompasses the processes, responsibilities, and technology necessary to ensure that metadata adds value across the entire company.

Metadata management solutions typically include a number of tools and features. These include metadata repositories, a business glossary, data lineage and tracking capabilities, impact analysis features, rules management, semantic frameworks, and metadata ingestion and translation.
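To make these features concrete, a metadata repository with a business glossary, lineage edges and impact analysis can be sketched in a few lines. This is a minimal illustration under simplifying assumptions, not any vendor's actual data model:

```python
class MetadataRepository:
    """Toy metadata repository: a business glossary plus lineage
    edges, with downstream impact analysis over those edges."""

    def __init__(self):
        self.glossary = {}     # term -> definition
        self.downstream = {}   # asset -> set of directly derived assets

    def define(self, term, definition):
        self.glossary[term] = definition

    def add_lineage(self, source, target):
        self.downstream.setdefault(source, set()).add(target)

    def impact(self, asset):
        """Every asset affected if `asset` changes (transitive closure)."""
        seen, stack = set(), [asset]
        while stack:
            for nxt in self.downstream.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

# Hypothetical asset names, for illustration only.
repo = MetadataRepository()
repo.define("churn_rate", "Share of customers lost per quarter")
repo.add_lineage("crm.customers", "staging.customers")
repo.add_lineage("staging.customers", "reports.churn")
repo.add_lineage("staging.customers", "ml.churn_features")
```

Impact analysis is just a walk over the lineage graph: a change to `crm.customers` here touches the staging table plus everything derived from it.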

Organizations looking to take their metadata management framework to the next level should review vendors closely and make an informed decision. Not surprisingly, while all offer powerful features, some are a better fit for a particular enterprise than others.

Tips on Selecting the Right Metadata Management Tool

  • Conduct a thorough analysis of your organization’s requirements. It should come as no surprise that different organizations in different industries require different tools, solutions, and vendors. Your organization may need to manage data more effectively so that it can put it to use in analytics or machine learning. Or it may need to establish a stronger framework for industry standards or regulatory compliance. A starting point is to identify the objectives that surround an initiative, what data should be better managed, why and where current gaps exist, and what’s required to build a bridge to a more effective metadata management strategy.
  • Carefully review different solutions and understand their strengths and weaknesses. As with any enterprise application, metadata management solutions are not created equal. Some vendors focus their products more for the regulatory concerns of financial services or healthcare while others specialize in the need to gain insights in retail or manufacturing. Some vendors excel in data cataloguing or impact analysis while others have better semantic search capabilities, business glossaries and rule management in place—or integrated machine learning and automation. Also, consider the set of partners a vendor has in place and the solution provider’s vision and roadmap for the future.
  • Take into account licensing costs, consulting fees, training, security and ancillary factors. It’s critical to understand the full cost of any software application and enterprise program before embarking on it. Metadata management is particularly tricky because it touches so many corners of the organization. It may involve an array of direct and indirect costs—including a need to adapt and better integrate IT and security. Make no mistake, vendors offer remarkably different products, business models and service delivery methods. This may result in unexpected issues and costs, including training and security.

In this Datamation article, we identify 10 top metadata management vendors and tools:

Alation

Value proposition for potential buyers: The vendor’s Data Catalog solution delivers automated data inventory within a highly searchable catalog, along with a powerful recommendation engine. The approach is designed for both data scientists and business users. It steers clear of technical jargon and promotes a best practice approach through collaborative endorsements and warnings. Alation was ranked as a “Leader” in the 2018 Gartner Magic Quadrant (MQ) for Metadata Management Solutions.

Key values/differentiators:

  • Alation supports numerous key metadata management tasks, including data valuation, the use of active metadata and trust models for decision-making, and proprietary frameworks designed for data scientists, data analysts, business users, and others seeking information.
  • The vendor’s partnerships include many industry heavyweights, including Teradata, Tableau, MicroStrategy, Hortonworks, Cloudera, IBM, Microsoft, Vertica and Trifacta. Alation supports a wide array of use cases and offers a high level of flexibility in the way metadata is ingested and managed.
  • The platform offers rich collaborative tools that allows groups and users to share information and insights derived from raw data. This includes data about top users, column-level popularity of data, and shared data and filters. It also includes company-specific data dictionaries and wiki articles.

Alex Solutions

Value proposition for potential buyers: Alex Solutions produces a marketplace for enterprise data through a robust and highly flexible data catalog, a customizable business glossary, intelligent tagging and policy driven data quality that takes place through detailed data profiling and machine learning. The platform also offers technology agnostic metadata scanners and built-in workflows. Gartner ranked the firm a “Leader” in its 2018 Gartner MQ for Metadata Management Solutions.

Key values/differentiators:

  • The platform supports use cases and specific regulatory requirements across a wide range of industries. It delivers powerful tools for broad and deep data management through a central enterprise marketplace.
  • Alex Solutions provides a set of tools and features that are designed to appeal to different user groups across a broad ecosystem. This may include data scientists, analytics specialists, regulatory executives and teams, and security and privacy specialists.
  • The platform supports a high level of automation, including the ability to capture end-to-end data lineage, identify sensitive data, understand usage and access patterns, and more. This makes it a powerful tool for managing a combination of on-premises data and cloud data. The product also includes industry-leading tools for metadata stewardship and data quality controls.

ASG Technologies

Value proposition for potential buyers: ASG Enterprise Data Intelligence (EDI) delivers a powerful and intuitive platform with a broad set of features and rich functionality. It includes tools for auto-discovery, cataloging, lineage, reference data management and governance. The vendor addresses the need to capture, manage and deliver data at web scale, through a secure portal. Gartner designated the firm a Leader in its 2018 Gartner MQ for Metadata management solutions.

Key values/differentiators:

  • ASG offers a high level of oversight and management, including the ability to monitor systems performance across teams, automate job processing, and schedule and automate workloads across the platform.
  • The vendor places a heavy emphasis on automating data inventory for regulatory compliance and agility. This includes GDPR and the recently passed California Consumer Protection Act (CCPA). The application automates scanning and identification of data in order to locate personally identifiable information (PII).
  • ASG EDI supports more than 220 data sources and numerous programming languages. The ability to bridge data silos makes the solution valuable for tackling numerous tasks, including gaining deeper insight into a supply chain. Moreover, the platform supports metadata exports to authorized downstream applications.
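The automated PII scanning described above reduces, at its simplest, to pattern matching over field values. The patterns and field names in this sketch are hypothetical and far narrower than a production scanner's validated rule sets:

```python
import re

# Hypothetical patterns; production scanners ship far broader rules
# plus checksum and context validation.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(records):
    """Return {field_name: set of PII types detected} across records."""
    findings = {}
    for record in records:
        for field, value in record.items():
            for pii_type, pattern in PII_PATTERNS.items():
                if pattern.search(str(value)):
                    findings.setdefault(field, set()).add(pii_type)
    return findings

rows = [{"note": "contact jane@example.com", "id": 17},
        {"note": "ssn on file: 123-45-6789", "id": 18}]
# scan_for_pii(rows) == {"note": {"email", "ssn"}}
```

Flagged fields would then feed the compliance workflows (GDPR, CCPA) that the platform automates.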

Collibra

Value proposition for potential buyers: The vendor offers powerful data governance and cataloging capabilities designed to consume and manage data across an enterprise. Collibra takes a collaborative approach to managing metadata. It focuses on group interactions by establishing user roles for data ownership and consumption. The vendor has designed the platform to work with emerging digital technologies such as the Internet of Things (IoT), artificial intelligence (AI) and machine learning. Collibra was ranked as a “Leader” in the 2018 Gartner MQ for Metadata Management Solutions.

Key values/differentiators:

  • The platform is highly flexible and configurable. It can address areas as diverse as financials, customers, products, services, supply chains or personnel. All of this metadata can be connected to risk, regulations and governance through overlays and specific policies and rules.
  • Collibra uses a ticketing approach to establish itself as the system of record for data. Gartner describes the approach as “innovative” and noted in the Metadata Management Solutions MQ that the vendor supports an understanding and trust of data at a deeper level.
  • The vendor has established Collibra University and Collibra Coaching Services to help customers learn how to use the application to maximum advantage. It also offers extensive on-demand webinars and numerous other resources.

DATUM

Value proposition for potential buyers: DATUM excels at identifying and understanding relationships in large and complex sets of enterprise data. Its solution, Information Value Management, includes powerful tools for discovery, connecting, analyzing and measuring the impact of data. It also includes features for linking data to specific business goals and showing progress against goals. Gartner ranked DATUM as a “Leader” on its 2018 MQ for Metadata Management Solutions.

Key values/differentiators:

  • Information Value Management is specifically designed to accommodate disparate enterprise data. It ties together several key metadata management tasks within a unified metadata management platform. These include performance management, process definition, classic data dictionaries, policy management and business glossaries.
  • The vendor offers persona-based business use cases that allow different users in an organization to accomplish tasks without heavy technical knowledge of data and metadata management. DATUM has designed its interface for a non-technical audience. It is known for ease-of-use.
  • The vendor’s focus is on managing business rules, processes and metrics that are most critical to the business. The application allows users to link fields, rules, standards, processes and metrics and view progress against goals through visual dashboards and detailed reports.

IBM

Value proposition for potential buyers: IBM’s InfoSphere Information Governance Catalog delivers a broad set of tools and features that address metadata management. This includes a collaborative authoring environment that helps users create a central catalog of enterprise-specific terminology, including relationships to data assets, along with robust filters for understanding lineage and numerous data relationships. The platform addresses business requirements across numerous industry and data domains. IBM appeared as a “Leader” on the Gartner 2018 MQ for Metadata Management Solutions.

Key values/differentiators:

  • The metadata management solution includes powerful tools for browsing and searching for terms and categories within the catalog. This includes the ability to view definitions, usage, and related terms. Consequently, an analyst can view information governance rules and information assets that are related to the term and flesh out details about these assets.
  • IBM has adopted a Unified Governance and Integration Platform that streamlines and automates innovation in data and analytics governance and stewardship. The framework, based on IBM’s metadata and governance reference architecture, taps AI and machine learning, including through its public cloud offering, Watson Knowledge Catalog.
  • IBM has adopted an open framework for metadata management. It is collaborating with other vendors, including Hortonworks, to create more open and integrated data environments for metadata.

Informatica

Value proposition for potential buyers: Informatica delivers a comprehensive, unified view of metadata, business context, tagging, relationships, data quality, and usage. The platform is designed for a wide array of users, including data analysts, data scientists, data stewards, and data engineers. It includes tools for business, technical and operational metadata management, connectors, semantic search and browse, end-to-end data lineage, data relationship discovery, and impact analysis. Gartner ranked Informatica a “Leader” in its 2018 MQ for Metadata Management Solutions.

Key values/differentiators:

  • The company has large market share and considerable clout in metadata management. It offers a powerful and highly flexible approach focused heavily on information governance and analytics capabilities aligned with the firm’s platform and application-agnostic approach. This makes the solution valuable across numerous industries and infrastructures.
  • The vendor’s end-to-end approach—encompassing enterprise data catalog, data preparation, data security, stewardship, governance and analytics—is connected to robust glossaries and a rules management framework that creates a powerful unified enterprise metadata platform.
  • The company has an ambitious vision and roadmap to expand the platform and features. This approach has spurred growing adoption of the vendor’s metadata solution across numerous markets and industries.

Oracle

Value proposition for potential buyers: Oracle offers three metadata management solutions: Oracle Enterprise Metadata Management (OEMM), Oracle Data Relationship Management (DRM), and Oracle Enterprise Data Management Cloud. The vendor’s solutions address data requirements for both Oracle and non-Oracle environments. They include data quality tools, master data management solutions, enterprise applications, platforms and more. Gartner ranked Oracle a “Leader” in its 2018 MQ for Metadata Management Solutions.

Key values/differentiators:

  • The company’s approach to metadata management is appealing to a diverse array of organizations. The applications can harvest, process and catalog metadata across a variety of platforms and frameworks, including Hadoop, ETL engines, BI, data warehouses and CASE.
  • Oracle Enterprise Metadata Management expands on the concept of a basic metadata repository. It delivers interactive searching and browsing of metadata as well as providing data lineage, impact analysis, semantic definition and semantic usage analysis for any metadata asset within the catalog.
  • The vendor is focused on integrating core capabilities into its metadata management solutions. This includes integration with business continuity, data movement, data transformation, data governance, catalogs, analytics, and streaming data solutions. Oracle is also adopting innovative approaches to cataloguing cloud data.

SAP

Value proposition for potential buyers: SAP offers four solutions for metadata management: SAP PowerDesigner, SAP Enterprise Architecture Designer, SAP Information Steward for metadata management, and SAP Data Hub. The company’s focus is on delivering powerful capabilities for diverse on-premises and cloud-based systems. Although various products will work with outside applications and data repositories, a primary focus for SAP is on its own enterprise applications and on specific personas. Gartner ranked SAP a “Visionary” in its 2018 MQ for Metadata Management Solutions.

Key values/differentiators:

  • SAP has expanded its metadata management offerings over the last few years. In 2017, it introduced SAP Data Hub, which addresses organizations’ needs to manage both active and passive metadata through agile and flexible orchestration. It is designed to discover, refine, enrich, and govern any type, variety, and volume of data across a distributed data landscape.
  • SAP Information Steward for metadata management is designed to handle a wide array of tasks associated with data cleansing and validation, taxonomy, insight, metadata management and governance. The solution supports numerous sources and file types and aims to deliver continuous insight into the quality of enterprise information. It is available on-premises as well as in the cloud.
  • The vendor has recently focused on improving its cloud-based metadata management architecture for the cloud through SAP PowerDesigner and Information Steward. This has expanded the use cases and personas the solutions support.

Smartlogic

Value proposition for potential buyers: The semantic AI platform from Smartlogic is designed to “transform data into knowledge” by putting metadata to work effectively. It ingests and analyzes diverse data in order to reveal targeted contextual data for tasks such as improving customer experience, contract lifecycle management, records management, data and text analytics, process automation, regulatory compliance, and information security. This makes it attractive across numerous industries, including healthcare, life sciences, media, financial services, and manufacturing. Gartner rated Smartlogic a “Leader” in its 2018 MQ for Metadata Management Solutions.

Key values/differentiators:

  • The firm offers strong semantic capabilities related to metadata management through its product Semaphore. This includes classifying and managing diverse data sets. The solution incorporates AI, natural language processing and machine learning to find and manage data relationships more effectively.
  • Semaphore uses a high level of automation and auto-classification to achieve robust information governance and metadata management. It also incorporates powerful data auditing tools and document fingerprinting in order to identify and secure important and sensitive data assets.
  • The vendor’s approach, which revolves around a metadata hub, allows it to break down many traditional data silos resulting from isolated applications and data repositories. The solution’s semantic capabilities contribute to delivering a faster and more accurate data management framework.

Metadata Management Vendors At-a-Glance

 

Alation

  • Focus: Metadata management for diverse user groups, with an emphasis on collaboration.
  • Key differentiator: Uses active metadata and trust models for decision-making. Strong partnerships and rich collaboration tools.
  • Key features: Offers automation to build models for decision-making; strong data sharing and filters.

Alex Solutions

  • Focus: Robust metadata framework for different user groups and personas.
  • Key differentiator: Delivers a marketplace for metadata through an approach that includes a robust data catalog.
  • Key features: Powerful tools for end-to-end data lineage, spotting sensitive data, and understanding usage and access behavior.

ASG Technologies

  • Focus: Auto-discovery, cataloging, lineage, reference data management and governance for diverse enterprise data. Offers a secure portal for managing metadata.
  • Key differentiator: Offers a high level of oversight and management, including monitoring system performance across teams, automating job processing, and scheduling and automating workloads across the platform.
  • Key features: Supports more than 220 data sources and numerous programming languages.

Collibra

  • Focus: Powerful and highly flexible data governance and cataloging capabilities that consume and manage data across an enterprise.
  • Key differentiator: Highly configurable environment for different industries, types of business, risks and regulations, and personas. Designed to work with emerging technologies like the IoT and machine learning.
  • Key features: Proprietary “ticketing” approach delivers strong metadata support. Strong educational and support framework.

DATUM

  • Focus: Identifying and understanding relationships in large and complex sets of enterprise data.
  • Key differentiator: Offers persona-based business use cases that allow different users to accomplish tasks without technical knowledge of metadata management.
  • Key features: Allows users to link fields, rules, standards, processes and metrics—and view progress against goals through visual dashboards and detailed reports.

IBM

  • Focus: Offers a broad set of tools and features built around collaborative authoring.
  • Key differentiator: Addresses business and metadata requirements across numerous industry and data domains through an open framework.
  • Key features: Creates a central catalog of enterprise-specific terminology, including relationships to data assets, along with robust filters for understanding lineage and data relationships.

Informatica

  • Focus: Delivers a comprehensive, unified view of metadata, business context, tagging, relationships, data quality, and usage across numerous user groups and personas.
  • Key differentiator: A flexible approach focused heavily on information governance and analytics capabilities aligned with the firm’s platform- and application-agnostic approach.
  • Key features: Offers an enterprise data catalog, data preparation, data security, stewardship, governance and analytics—all connected to robust glossaries and a rules management framework.

Oracle

  • Focus: Three metadata management solutions that address requirements for both Oracle and non-Oracle systems.
  • Key differentiator: Delivers interactive searching and browsing of metadata, as well as data lineage, impact analysis, semantic definition and semantic usage analysis for any metadata asset within the catalog.
  • Key features: Solutions can harvest, process and catalog metadata from diverse platforms and frameworks, including Hadoop, ETL engines, BI, data warehouses and CASE.

SAP

  • Focus: Four solutions designed for different enterprise metadata management requirements, both on-premises and in the cloud.
  • Key differentiator: SAP is a logical choice for those using the firm’s enterprise solutions. However, it is expanding use cases and personas to include other platforms and data.
  • Key features: Data Hub addresses both active and passive metadata through agile and flexible orchestration. Information Steward addresses data cleansing and validation, taxonomy, insight, metadata management and governance.

Smartlogic

  • Focus: A semantic AI platform that aims to “transform data into knowledge” through automated AI-based metadata management.
  • Key differentiator: Delivers automation and auto-classification to achieve robust information governance and metadata management across numerous industries.
  • Key features: Incorporates powerful data auditing tools and document fingerprinting in order to identify and secure important and sensitive data assets.

]]>
What is Artificial Intelligence & How Does It Work? https://www.datamation.com/artificial-intelligence/what-is-artificial-intelligence/ Fri, 24 May 2019 08:00:00 +0000 http://datamation.com/2019/05/24/what-is-artificial-intelligence/

The term artificial intelligence (AI) refers to computing systems that perform tasks normally considered within the realm of human decision making. These software-driven systems and intelligent agents incorporate advanced data analytics and Big Data applications. AI systems leverage this knowledge repository to make decisions and take actions that approximate cognitive functions, including learning and problem solving.

AI, which was introduced as an area of science in the mid 1950s, has evolved rapidly in recent years. It has become a valuable and essential tool for orchestrating digital technologies and managing business operations. Particularly useful are AI advances such as machine learning and deep learning.

See our list of the top artificial intelligence companies

It’s important to recognize that AI is a constantly moving target. Things that were once considered within the domain of artificial intelligence – optical character recognition and computer chess, for example – are now considered routine computing. Today, robotics, image recognition, natural language processing, real-time analytics tools and various connected systems within the Internet of Things (IoT) all tap AI in order to deliver more advanced features and capabilities.

Helping develop AI are the many cloud companies that offer cloud-based AI services. Statista projects that AI will grow at an annual rate exceeding 127% through 2025.

By then, the market for AI systems will top $4.8 billion. Consulting firm Accenture reports that AI could double annual economic growth rates by 2035 by “changing the nature of work and spawning a new relationship between man and machine.” Not surprisingly, observers have both heralded and derided the technology as it filters into business and everyday life.

Also see: AI Jobs

Artificial intelligence has broad applications across many areas of business.

History of Artificial Intelligence: Duplicating the Human Mind

The dream of developing machines that can mimic human cognition dates back centuries. In the 1890s, science fiction writers such as H.G. Wells began exploring the concept of robots and other machines thinking and acting like humans.

It wasn’t until the early 1940s, however, that the idea of artificial intelligence began to take shape in a real way. After Alan Turing introduced the theory of computation – essentially how algorithms could be used by machines to produce machine “thinking” – other researchers began exploring ways to create AI frameworks.

In 1956, researchers gathering at Dartmouth College launched the practical application of AI. This included teaching computers to play checkers at a level that could beat most humans. In the decades that followed, enthusiasm about AI waxed and waned.

In 1997, a chess-playing computer developed by IBM, Deep Blue, beat reigning world chess champion, Garry Kasparov. In 2011, IBM introduced Watson, which used far more sophisticated techniques, including deep learning and machine learning, to defeat two top Jeopardy! champions.

Although AI continued to advance over the next few years, observers often cite 2015 as the landmark year for AI. Google Cloud, Amazon Web Services, Microsoft Azure, and others began to step up research and improve natural language processing capabilities, computer vision and analytics tools.

Today, AI is embedded in a growing number of applications and tools. These range from enterprise analytics programs and digital assistants like Siri and Alexa to autonomous vehicles and facial recognition.

Different Forms of Artificial Intelligence

Artificial intelligence is an umbrella term that refers to any and all machine intelligence. However, there are several distinct and separate areas of AI research and use – though they sometimes overlap. These include:

  • General AI. These systems typically learn from the world around them and apply data in a cross-domain way. For example, DeepMind, now owned by Google, used a neural network to learn how to play video games similar to how humans play them.
  • Natural Language Processing (NLP). This technology allows machines to read, understand, and interpret human language. NLP uses statistical methods and semantic programming to understand grammar and syntax, and, in some cases, the emotions of the writer or those interacting with a system like a chat bot.
  • Machine perception. Over the last few years, enormous advances in sensors — cameras, microphones, accelerometers, GPS, radar and more — have powered machine perception, which encompasses speech recognition and computer vision used for facial and object recognition.
  • Robotics. Robot devices are widely used in factories, hospitals and other settings. In recent years, drones have also taken flight. These systems — which rely on sophisticated mapping and complex programming — also use machine perception to navigate through tasks.
  • Social intelligence. Autonomous vehicles, robots, and digital assistants such as Siri and Alexa require coordination and orchestration. As a result, these systems must have an understanding of human behavior along with a recognition of social norms.
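
The statistical flavor of NLP described above can be illustrated with a toy lexicon-based sentiment scorer. This is a minimal sketch, not a production approach (modern NLP systems use trained models), and the word lists below are invented for illustration:

```python
# Tiny hand-built lexicons; real systems learn these weights from data.
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def sentiment(text):
    """Score polarity: +1 for each positive token, -1 for each negative one."""
    tokens = text.lower().split()
    return sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)

print(sentiment("the service was great and the staff excellent"))  # 2 (positive)
print(sentiment("terrible support"))                               # -1 (negative)
```

A real chatbot or sentiment system would add tokenization, negation handling and a learned model, but the underlying principle is the same: map language features to numeric evidence.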

Methods of Artificial Intelligence

There are a number of approaches used to develop and build AI systems. These include:

  • Machine Learning (ML). This branch of AI uses statistical methods and algorithms to discover patterns and “train” systems to make predictions or decisions without explicit programming. It may consist of supervised and semi-supervised ML (which includes classifications and labels) and unsupervised ML (using only data inputs and no human-applied labels).
  • Deep Learning. This approach relies on artificial neural networks (ANNs) to approximate the neural pathways of the human brain. Deep learning systems are particularly valuable for developing computer vision, speech recognition, machine translation, social network filtering, video games and medical diagnosis.
  • Bayesian Networks. These systems rely on probabilistic graphical models that use random variables and conditional independence to better understand and act on the relationships between things, such as a drug and side effects or darkness and a light switch turning on.
  • Genetic Algorithms. These search algorithms tap a heuristic approach modeled after natural selection. They use mutation models and crossover techniques to solve complex biological challenges and other problems.
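
To make the supervised-learning idea concrete, the sketch below trains a single perceptron (the simplest artificial-neuron model) to reproduce a logical AND from labeled examples. It is a toy illustration of training without explicit programming, not production ML code:

```python
def train_perceptron(samples, epochs=25, lr=0.1):
    """Learn two weights and a bias from (inputs, label) pairs via error correction."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # -1, 0 or +1
            w[0] += lr * err * x1       # nudge weights toward the correct output
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Labeled training data for logical AND: only (1, 1) maps to 1.
AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_DATA)
print([predict(w, b, x) for x, _ in AND_DATA])  # [0, 0, 0, 1]
```

The same error-correction loop, stacked into many layers and millions of weights, is the core of the deep learning approach described next.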

Uses of Artificial Intelligence

There is no shortage of compelling use cases for AI. Here are some leading examples:

Healthcare

Artificial intelligence can play a leading role in healthcare. It enables health professionals to understand risk factors and diseases at a deeper level. It can aid in diagnosis and provide insight into risks. AI also powers smart devices, surgical robots and Internet of Things (IoT) systems that support patient tracking and alerts.

Agriculture

AI is now widely used for crop monitoring. It helps farmers apply water, fertilizer and other substances at optimal levels. It also aids in preventative maintenance for farm equipment and it is spawning autonomous robots that pick crops.

Finance

Few industries have been transformed by AI more than finance. Today, quants (algorithms) trade stocks with no human intervention, banks make automated credit decisions instantly, and financial organizations use algorithms to spot fraud. AI also allows consumers to scan paper checks and make deposits using a smartphone.

Retail

A growing number of consumer-facing apps and tools support image recognition, voice and natural language processing and augmented reality (AR) features that allow consumers to preview a piece of furniture in a room or office or see what makeup looks like without heading to a physical store. Retailers are also using AI for personalized marketing, managing supply chains, and cybersecurity.

Travel, Transportation and Hospitality

Airlines, hotels, and rental car companies use AI to forecast demand and adapt pricing dynamically. Airlines also rely on AI to optimize the use of aircraft for routes, factoring in weather conditions, passenger loads and other variables. They can also understand when aircraft require maintenance. Hotels are using AI, including image recognition, for deploying robots and security monitoring. Autonomous vehicles and smart transportation grids also rely on AI.

Benefits & Risks of Artificial Intelligence

For businesses, it’s not a question of whether to use AI — many organizations already tap into it on a daily basis — it’s a question of how to maximize the benefits and minimize the risks.

As a starting point, it’s essential to know how and where AI can improve business processes and to build a workforce that understands what artificial intelligence is, where it fits in and what opportunities it offers. This may require workers to have new knowledge and skills – and AI salaries are competitive – along with a rethinking of service providers, workflows and internal processes.

Artificial intelligence serves up other challenges. One of the biggest stumbling points for AI, including machine learning and deep learning, is poorly constructed frameworks. When users train models with bad data or construct flawed statistical models, incorrect and even dangerous outcomes often follow.

AI tools, while increasingly easy to use, require data science expertise. Other important factors include ensuring there’s enough computing power and the right cloud-based infrastructure in place, and mitigating fears about job loss.

In any case, artificial intelligence is introducing bold opportunities to create smarter and more powerful machines. In the years ahead, AI will certainly further transform business and life.

]]>
Data Center Tiers Explained: Tier Specifications & Requirements https://www.datamation.com/data-center/data-center-tiers/ Thu, 02 May 2019 08:00:00 +0000 http://datamation.com/2019/05/02/data-center-tiers-formulating-a-strategy/

Data centers serve as the foundation for the modern enterprise. They power the IT systems that run complex businesses, including servers, backup systems, telecommunications equipment, and numerous other technology components. More than 8.5 million data centers exist worldwide, and the figure in the U.S. now tops 3 million.

Keeping systems up and running is paramount. However, not all IT infrastructure is created equal — and not all business needs are the same. That’s why data center tiers exist. The term refers to different types of systems, components and infrastructure that are arranged into groups, or tiers.

Each tier is designed to address specific IT and equipment requirements. Tier 1, for instance, involves basic infrastructure requirements, while Tier 4 is comprised of the most complex components. Let’s look at each of the Tiers.

Data Center Tier Standards and Requirements

Understanding how data center tiers work is crucial to designing an effective IT strategy.  The tier classification system was created by the Uptime Institute in the mid 1990s. Since then, the framework has evolved from a shared industry terminology to a global standard that includes third-party validation of data center critical infrastructure.

The tiering system is progressive, meaning that each tier is dependent on the tier below it and incorporates the requirements in the lower tier.

Here is the role that each tier plays in the data center:

Tier I. Basic Capacity

A Tier I facility incorporates dedicated site infrastructure to support information technology beyond an office setting. It can be thought of as a tool for achieving a tactical level of operational sustainability. The Uptime Institute notes that an organization’s Tier I needs are primarily driven by time-to-market and first-cost issues rather than lifecycle costs. It typically includes:

  • Dedicated space for IT systems, such as servers and backup devices.
  • Uninterruptible power supply (UPS) devices to handle power fluctuations and failures.
  • Cooling equipment to keep systems running at optimal temperatures.
  • Engine/power generators to keep systems operating and online during extended power outages.

Tier II. Redundant Capacity Components

A Tier II data center framework encompasses redundant critical power and cooling components. These provide a margin of safety from power disruptions and other major events that can force system shutdowns or cause damage. Tier II is also tactical in nature. Most organizations that rely on Tier I and Tier II capabilities don’t require real-time capabilities, according to the Uptime Institute. Tier II systems and devices typically include:

  • Power and cooling equipment such as UPS modules, chillers or pumps.
  • Engine generators.

Tier III. Concurrently Maintainable Systems

Tier III facilities can operate without impact to IT systems during equipment upgrades, changes or maintenance. A data center that boasts Tier III capabilities avoids shutdowns by establishing redundant delivery paths for power and cooling. Uptime Institute notes that Tier III (and Tier IV) site infrastructure solutions have an effective life beyond the current IT requirement. What’s more, they are typically utilized by organizations that recognize the cost of a disruption — in terms of actual dollars as well as market share.

Tier IV. Fault Tolerance

The infrastructure in a Tier IV data center builds on the capabilities of a Tier III facility by adding fault tolerance to the topology of the site. Fault tolerance describes the ability to limit the effects of a disruption or interruption before they reach IT operations. It is particularly important for mission-critical applications and systems. This tier represents the most strategic level of protection. It often involves multiple redundant systems.

Data Center Tiers at a Glance

  • Tier I: Tactical. Has a single path for power and cooling and few redundant and backup components. Key components: UPS modules, cooling equipment, power generators. Uptime: 99.671% (28.8 hours of downtime annually). Risk: medium to high.
  • Tier II: Tactical. Has a single path for power and cooling with some redundant and backup components. Key components: UPS modules, chillers, pumps, power generators. Uptime: 99.741% (22 hours of downtime annually). Risk: medium.
  • Tier III: Strategic. Has multiple paths for power and cooling, plus redundant systems, so that upgrades, updates and maintenance can take place without taking the data center offline. Key components: UPS modules, chillers, pumps, power generators (more extensive deployment). Uptime: 99.982% (1.6 hours of downtime annually). Risk: medium to low.
  • Tier IV: Strategic. Completely fault tolerant, with redundancy for every component. Key components: UPS modules, chillers, pumps, power generators (most extensive deployment). Uptime: 99.995% (26.3 minutes of downtime annually). Risk: low.
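
The uptime percentages above translate directly into allowed downtime. A quick back-of-the-envelope check, assuming a 365-day (8,760-hour) year:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_downtime_hours(availability_pct):
    """Hours per year a facility may be down at a given availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

for tier, pct in [("Tier I", 99.671), ("Tier II", 99.741),
                  ("Tier III", 99.982), ("Tier IV", 99.995)]:
    print(f"{tier}: {annual_downtime_hours(pct):.1f} hours/year")
```

Running this reproduces the familiar figures: roughly 28.8 hours for Tier I and 1.6 hours for Tier III, with Tier IV allowing only about 26 minutes.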

The Evolution of Data Centers

As data centers have become more complex and interconnected — and as high-performance computing (HPC) workloads and cloud computing have moved into the mainstream of the enterprise — tiering strategies have had to address new and sometimes different issues. Make no mistake, modern facilities don’t resemble the data centers of the 1980s, which often housed mainframe or midrange systems.

In order to adopt the right data center tiers, it’s particularly important to examine how a service provider has built and maintained its data centers. There are several key factors, and, in some cases, these may intersect with industry groups and standards, such as those created by the Telecommunications Industry Association (TIA). Key factors may include:

  • Geographic location/risk and earthing standards
  • Commercial building standards.
  • Cabling standards.
  • Cooling systems.
  • Fiber-optic coding.
  • Operational practices.
  • Equipment standards, including maintenance and replacement of UPS units, generators and other devices.

Data Center Tiers and Cloud Computing

As organizations look to build out a more agile and flexible IT framework, colocation and various cloud computing facilities — including hybrid clouds — require close scrutiny and different considerations. Indeed, today’s data centers often function as one large private cloud.

  • These facilities must deliver the level of availability businesses require — typically across multiple locations. Given the growth of multicloud computing, these multiple locations often entail a multicloud strategy.
  • Because an organization doesn’t have control of systems in the data center, service level agreements (SLAs) are a key consideration.
  • The Uptime Institute has expanded its tier ratings to include operational sustainability. This addresses staff and operational components, including prioritized behaviors and risks. It’s an aspect that businesses should consider as they move to cloud platforms and possibly Infrastructure-as-a-Service (IaaS).

How to Formulate a Data Center Strategy

A starting point for navigating data center tiers is to understand your organization’s needs and how a tiering strategy fits in — particularly as the march to the cloud accelerates. As businesses turn to outside data center providers for services, and in some cases use cloud companies, the challenges of vetting vendors grow.

Consequently, many organizations are turning to outside consultants and organizations to assess data centers based on their adherence to specific tiers. However, the Uptime Institute is the only organization that will rate and certify a data center. This ensures that a facility was constructed as designed, that site functions and equipment work as billed, and that the data center can demonstrate performance standards that match its tier designation. It’s also worth noting that some companies now boast standards that exceed Uptime Institute and TIA requirements.

It’s also important to recognize that tiering isn’t the only consideration in establishing or choosing a data center provider. Configuration management, security, utilization levels, licensing requirements, disaster recovery and, possibly, rack capacity and density issues are also important considerations. For businesses looking to tie in data center tiering, it’s also essential to understand the specific services offered and what quality of service (QoS) levels are guaranteed.

Nevertheless, a focus on data center tiers can help an enterprise boost performance, trim costs and ensure that the business supports current and future technology and performance requirements. According to AFCOM’s 2018 State of the Data Center Industry study, the need for diligence is growing. Nearly 60 percent of companies currently own between two and nine data center facilities but the average number of data centers that organizations manage will grow to 10.2 by 2021. Moreover, many existing facilities will require renovation and changes.

To be sure, as organizations grow and evolve — and business-critical services become mandatory — executing on a data center tiering strategy is increasingly important. It’s at the center of successful IT and business.

Key Steps to Improving Your Tiers

It’s important to match your organization’s requirements with appropriate data center tiers. Here are some tips for determining what your enterprise requires and establishing appropriate data center tiering levels:

Step 1: Assess Your Enterprise and Your Requirements.

Understand the specific availability and performance requirements of various enterprise systems and applications. For example, a branch office at a financial services firm doesn’t require the same level of performance and protection as an e-commerce system or a securities trading system. It may be necessary to extend this assessment to supply partners and customers.

Step 2: Identify Risk Levels.

What types of disruptions or interruptions are possible? How will an interruption or system failure impact the business? What will it cost in terms of money and reputation for a system or application to go down for a few minutes or a few hours? It’s critical to document the costs of a business interruption and compare them to the cost of maintaining or purchasing a specific data center tier.

Step 3: Assess Internal Needs and/or Outside Providers.

Determine what changes you will need to make if you are operating your own data center. If you are looking to an external provider for compute, bandwidth or cloud resources, determine the vendor’s certifications, rankings, policies, procedures, protections and what SLA they offer (including compensation for failure to meet standards).

Step 4: Map Systems to Tiers.

With all the necessary information on hand it’s possible to determine what makes sense — and what maximizes dollars — while reducing risk. It’s vital to avoid ambiguous language and decision-making and ensure that all tiering decisions are based on concrete and defensible criteria. It’s also paramount to build in notification systems to know if and when a data center doesn’t perform to established tiering criteria.  
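
One way to make Step 4 concrete is to frame tier selection as minimizing expected annual cost: the price of operating at a given tier plus the expected cost of its allowed downtime. The sketch below is a simplified illustration; the function name, tier prices and outage costs are all hypothetical figures, not vendor data:

```python
# Approximate allowed downtime per year, from the tier table earlier (hours).
TIER_DOWNTIME_HOURS = {"I": 28.8, "II": 22.0, "III": 1.6, "IV": 0.44}

def cheapest_tier(outage_cost_per_hour, tier_annual_cost):
    """Pick the tier minimizing tier price + expected downtime cost."""
    def total(tier):
        return tier_annual_cost[tier] + TIER_DOWNTIME_HOURS[tier] * outage_cost_per_hour
    return min(tier_annual_cost, key=total)

# Hypothetical annual prices: higher tiers cost more but lose less to outages.
PRICES = {"I": 50_000, "II": 75_000, "III": 150_000, "IV": 300_000}
print(cheapest_tier(500, PRICES))     # low-stakes branch office
print(cheapest_tier(50_000, PRICES))  # revenue-critical system
```

In practice the outage cost per hour comes out of the Step 2 risk assessment, and real tier pricing varies widely by provider and region; the point is that the mapping decision rests on concrete, defensible numbers rather than intuition.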

Data Centers Tier Strategy: The Final Frontier

The most important step in your data center tiering strategy? Understanding exactly how important it is to have a fully developed strategy.

]]>