AWS re:Invent Recap: SageMaker Data Wrangler

What happened?

The new service, SageMaker Data Wrangler, was announced during Andy Jassy’s 2020 re:Invent keynote. Incorporated into Amazon SageMaker, this tool simplifies the data preparation workflow so the entire process can be done from one central interface.

Why is it important?

  • SageMaker Data Wrangler contains over 300 built-in data transformations to normalize, transform, and combine features without having to write any code.
  • With SageMaker Data Wrangler’s visualization templates, transformations can be previewed and inspected in Amazon SageMaker Studio.
  • Data can be collected from multiple data sources and imported in a single pass for transformation.
  • Data can be in various file formats, such as CSV files, Parquet files, and database tables.
  • The data preparation workflow can be exported to a notebook or code script for use in an Amazon SageMaker pipeline or for later reuse.

Why We’re Excited

SageMaker Data Wrangler makes it easier for data scientists to prepare data for machine learning training using the pre-loaded data preparation options. With preparation completed more quickly, our data science teams can accelerate the delivery of solutions to clients.

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

re:Invent 2020 Recap: Partner Keynote Announcements

In case you missed it, here are the major announcements and newest competencies from the re:Invent 2020 Partner Keynote on Wednesday, December 3rd, 2020:

  1. AWS SaaS Boost supports accelerated modernization efforts by removing the heavy lifting of taking existing applications into the cloud. SaaS Boost is available open source as a ready-to-use reference environment that enables Independent Software Vendors (ISVs) to accelerate the move to Software-as-a-Service (SaaS). For businesses of all sizes, AWS SaaS Boost helps ISVs migrate applications to AWS with minimal changes at a more rapid rate.
  2. AWS ISV Partner Path: Beginning in January 2021, any AWS Partner with a software solution that runs on or is integrated with Amazon Web Services (AWS) can join the new AWS ISV Partner Path, which helps customers identify AWS-reviewed solutions. It aims to accelerate the engagement ISVs have with AWS and shifts focus from partner-level badging to solution-level badging to meet customer needs.
  3. Managed Entitlements are now available in AWS Marketplace. This simple, automated license tracking makes governance, compliance, and distribution of software license entitlements easier. It lets buyers monitor their software license entitlements, providing visibility that helps ensure accurate license usage tracking. ISVs can use AWS License Manager to create and manage user licenses for products used on AWS and on premises. This is effective immediately for over 7,000 products currently listed in AWS Marketplace.
  4. Private Marketplace APIs: AWS Marketplace now enables buyers to manage their Private Marketplace using a set of publicly accessible APIs. By curating a catalog of approved third-party software solutions, the Private Marketplace helps customers navigate these products and ISVs in their journey to transform, modernize, and govern.
  5. The AWS Service Catalog App Registry serves as the central repository to define and associate resources to better manage applications. Maintain a single source of truth with the integration of AppRegistry into application development processes to create application definitions and resource collections. Builders can define AWS CloudFormation stacks, metadata that describes partner-built AWS applications, descriptions, and attribute group associations. This helps ensure critical information like organizational ownership, data sensitivity, and cost center is up to date for IT leaders and business stakeholders. It also makes the procurement process simpler and more seamless for customers and ISVs.
  6. Professional Services in AWS Marketplace makes it easier to find and buy services to configure, deploy, and manage third-party software. It allows partners to reach customers in new ways, since they can now publish services in the same place as software, simplifying the contract process for buyers and sellers. Sellers gain an opportunity to reach new prospective customers by listing professional service offerings as individual products, or by bundling them with existing software products in AWS Marketplace using pricing, payment schedules, and service terms independent from the software. Buyers gain access to professional services from multiple trusted sellers and a much easier way to manage payment for both the software and the services provided.
  7. Newest Competencies Announced:
    • Mainframe Migration Competency: This recognizes AWS Partners with proven technology, customer success, mature practices, and a track record in migrating mainframe applications, workloads, and data to AWS.
    • Public Safety & Disaster Recovery (expanded to include Technology Partners): This competency signifies specialized and dedicated AWS Technology Partners that help customers improve preparation, response, and recovery from emergencies and disasters.
    • Energy: This highly specialized designation will showcase AWS Partners who have completed a thorough technical validation with AWS and demonstrated continued success in supporting unique energy needs.
    • Travel & Hospitality: Partners with this competency help customers accelerate their digital transformation efforts across marketing and sales, customer experience, core operations, finance, human resources, and IT, helping travel and hospitality organizations build a resilient business and accelerate innovation.

AWS re:Invent 2020 Keynote Service Announcements

Andy Jassy Keynote Service Announcements

There were many major product launch and update announcements during Andy Jassy’s re:Invent keynote presentation. We put together a list of these awesome technologies by service area to give you a quick overview of what they are and why they matter:

COMPUTE

Habana Gaudi-based Amazon EC2 instances will be available in the first half of 2021. Powered by new Habana Gaudi processors from Intel, users can expect up to 40% better price/performance over current GPU-based EC2 instances. Built specifically for ML training, these instances work seamlessly with TensorFlow and PyTorch.

AWS Trainium is an ML training chip custom designed by AWS to deliver the most cost-effective training in the cloud. It supports PyTorch, MXNet, and TensorFlow using the same Neuron SDK that Inferentia uses. With both Trainium and Inferentia, customers will have an end-to-end flow of ML compute, from scaling training workloads to deploying accelerated inference. AWS Trainium will be available as an EC2 instance and in Amazon SageMaker in the second half of 2021.

CONTAINERS

Amazon ECS Anywhere lets you run ECS in your own data center. Using the same AWS-style APIs, cluster management, and workload scheduling and monitoring tools, ECS Anywhere works on any infrastructure (cloud, on-prem, etc.) to enable accelerated transitions.

Amazon EKS Anywhere will also be available and lets you run EKS in your own data center. Leverage your EKS experience to set up, upgrade, and operate on-prem Kubernetes clusters. The Amazon EKS Distro is open source.

STORAGE

Amazon Elastic Block Store (EBS) Offers 2 New Volume Types:

  • gp3 offers a 20% lower cost per gigabyte and lets customers independently increase IOPS and throughput without provisioning additional block storage capacity. gp3 provides a predictable baseline of 3,000 IOPS and 125 MiB/s regardless of volume size. Customers looking for higher performance can scale up to 16,000 IOPS and 1,000 MiB/s for an additional fee. This is great for applications that require high performance at lower costs, such as MySQL, Cassandra, virtual desktops, and Hadoop analytics.
  • io2 Block Express, the first SAN built for the cloud, takes advantage of advanced communication protocols driven by the AWS Nitro System to allow for up to 256,000 IOPS and 4,000 MB/s of throughput and a maximum volume size of 64 TiB, all with sub-millisecond, low-variance I/O latency.

SERVERLESS

Aurora Serverless v2 scales to hundreds of thousands of transactions in a fraction of a second and can deliver up to 90% cost savings compared to provisioning for peak capacity. Multi-AZ support, Global Database, read replicas, Backtrack, and Parallel Query features are available. The MySQL-compatible edition is available now, and PostgreSQL compatibility will be available early next year.

AWS Proton will help in building microservices by defining a stack and provisioning AWS services, using parameters to push code, deploy, and set up monitoring and alarms. For example, if the central engineering team makes a change in the stack, downstream microservices teams can be notified. This helps optimize the deployment of serverless applications. It’s free of charge; you only pay for the underlying services and resources.

DATABASE

Babelfish for Aurora PostgreSQL presents a new translation capability to complement the AWS Schema Conversion Tool (SCT) and AWS Database Migration Service (DMS), letting SQL Server applications run on Aurora PostgreSQL with little or no code change. Schema and data can be migrated using SCT and DMS; the application configuration can then be updated to point to Aurora instead of SQL Server. Babelfish will be available as open source.

AWS Glue Elastic Views lets you set up a materialized view that copies data to a target store and manages all the dependencies between those steps. If the source data changes, Elastic Views picks up the change and applies it; if the data structure changes, it alerts the developer to make the necessary adjustments. AWS Glue Elastic Views is serverless and scales capacity up or down automatically based on demand, so there’s no infrastructure to manage.

Amazon QuickSight Q builds on Amazon QuickSight, the first Business Intelligence (BI) service with pay-per-session pricing. Ask any question and get answers in seconds. Trained on many data points and business areas, Amazon QuickSight Q uses NLP to understand domain-specific business language and auto-generate data models that capture the meanings and relationships of data.

MACHINE LEARNING & ARTIFICIAL INTELLIGENCE

SageMaker Data Wrangler aggregates and prepares ML features to speed up data preparation. Point it at a data store and make use of over 300 built-in transformations, which are suggested automatically. It imports and inspects data to identify the various types, recommends transformations, and applies them to the entire data set, with all infrastructure managed under the covers. The prepared data can then be made available for inference in real time.

SageMaker Feature Store is a purpose-built feature store for ML. This tool makes it much simpler to name, organize, find, and share SageMaker features with teams. It also makes features easily accessible for both training and inference. Because it is located in SageMaker, development teams experience very low latency for inference when building machine learning models.

Amazon SageMaker Pipelines is the first purpose-built CI/CD service for ML. It automates the different steps of the ML workflow, such as data loading, data transformation, training and tuning, and deployment. Create, automate, and manage end-to-end ML workflows at scale with the peace of mind of knowing the various versions are stored in a central repository.

Amazon DevOps Guru automatically detects operational issues early and provides recommended actions to address the problem.

Amazon Monitron is an end-to-end system that leverages machine learning (ML) to detect abnormal behavior in industrial machinery and alerts teams when predictive maintenance is needed, helping reduce unplanned downtime.

AMAZON CONNECT

Amazon Connect Wisdom uses ML to deliver real-time product and customer information and can integrate with Salesforce and ServiceNow. As a call is happening, Wisdom uses the call transcription to put the right information on the agent’s screen, including what to do when a given situation arises. This is a game-changer for customer support processes.

Amazon Connect Customer Profiles presents a unified profile of a customer to the representative during a call. It pulls profiles from data stores such as Zendesk, Marketo, and ServiceNow and connects the contact ID with a customer ID assigned consistently across all of them. This helps normalize information from various platforms and displays it in an organized way. Agents have access to all the information that can help them have better, more holistic customer interactions.

Real-Time Contact Lens gives supervisors a sophisticated machine learning tool for detecting customer experience issues during live calls, applying ML under the covers to improve calls and the customer experience in real time. Criteria-based alerts are sent to ensure customers aren’t asked the same questions again, minimizing frustration and enabling real-time resolution.

EDGE COMPUTING

AWS Outposts offers a hybrid solution that brings the familiar and reliable AWS infrastructure, AWS services, APIs, and tools to any data center, co-location space, or on-premises facility, so you can build, manage, and scale your on-premises applications. AWS Outposts is meant for workloads requiring low-latency access to on-premises systems, local data processing, data residency, and migration of system-interdependent applications.

AWS Wavelength Zones are an AWS Infrastructure offering that provides optimized service for mobile edge computing applications. They enable application traffic from 5G devices to reach application servers without leaving the telecommunications network, so developers can now build the next generation of ultra-low latency applications.

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

AWS re:Invent Recap: macOS Instances for Amazon EC2

Andy Jassy Keynote Service Announcements

Developers will now have a virtual environment to leverage for designing apps for the Mac, iPhone, and other Apple devices, powered by Mac minis.

What Happened: Amazon Expands App Development & Testing Capabilities with Native Mac Instances for AWS

During Monday’s re:Invent kickoff, Amazon announced the availability of macOS instances on AWS via Amazon Elastic Compute Cloud (EC2), a welcome alternative to Microsoft Windows and open-source Linux. Powered by Mac mini hardware and the AWS Nitro System, these Amazon EC2 Mac instances can be used to build, test, package, and sign Xcode applications for the Apple platform, including macOS, iOS, iPadOS, tvOS, watchOS, and Safari.

Why It’s Important

Since no other major cloud provider has compute instances running macOS, Apple developers have a whole new world of opportunities to develop and test creative applications faster, and AWS Partners will be able to provide more powerful development capabilities for clients. It’s a competitive win-win for everyone – AWS, AWS consulting partners, the development community as a whole, and the users who will eventually use the apps.

With this, the Mac minis operate as fully integrated and managed instances like other Amazon EC2 instances, enabling developers to natively run macOS in Amazon Web Services. With immediate access to the virtual macOS environments to build and test applications, development teams and organizations can innovate more quickly and bring products faster to market.

Apple developers benefit from the flexibility, scalability, security, reliability and cost benefits of AWS.

Availability

Mac instances are available On-Demand at a rate of $1.083 per hour or with Savings Plans. They currently offer the macOS Mojave (10.14) and macOS Catalina (10.15) operating systems. Supported regions include US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore), with more to come. Learn more in the featured AWS video or contact an Idexcel expert to get started.

The Best of Both: Serverless and Containers with AWS Fargate and Amazon EKS

Co-Authored by: Pradeepta Sahu, DevOps Lead & Sidharth Parida, DevOps Engineer

When enterprises require more control over the components of their applications, they move away from hands-on infrastructure management by eliminating self-managed, server-based infrastructure (with EC2) and migrating to automated Container-as-a-Service (CaaS) offerings. Through this migration to CaaS, companies gain flexibility and agility in DevOps because their workloads are no longer tied to a specific machine. This approach uses AWS Cloud resources like AWS Fargate for Amazon EKS to overcome the disadvantages of OS virtualization (i.e., running multiple OSs on a physical server) by introducing containers that give teams more control over the software delivery model.

Our Idexcel DevOps team has created a strategic solution using AWS Fargate for Amazon EKS that reduces development costs on new projects. This managed, microservices-based platform breaks the burden of managing monolithic applications down into more easily managed serverless Kubernetes infrastructure. Why are these monolithic applications such a challenge? They become tightly coupled and entangled as an application evolves, making it difficult to isolate services for purposes such as independent scaling or code maintainability. A single point of failure can therefore shut down the entire production system until the necessary recovery actions are taken.

Since fully managed monolithic applications leave control with the primary cloud provider, organizations are realizing the critical need to have more control of their infrastructure in the cloud. AWS Fargate delivers serverless container capabilities to Amazon EKS, combining the best of both serverless and container benefits. With serverless capabilities, developers don’t need to worry about purchasing, provisioning, and managing backend servers. Serverless architecture is also highly scalable and easy to deploy, with plug-and-play features. This integration between Fargate and EKS enables Kubernetes users to transform a standard pod definition into a Fargate deployment. Fargate is a serverless compute engine for containers that removes the need to provision and manage servers. It allocates the right amount of compute needed for optimal performance, eliminating the need to choose instances, and automatically scales the cluster capacity.

This means that EKS can use Fargate to provide a serverless compute engine for containers, removing the need to provision, configure, or scale groups of virtual machines to run them. EKS does this by letting the existing (managed) nodes communicate with Fargate pods in a cluster that already has worker nodes associated with it.

Major Advantages of Fargate Managed Nodes

Faster DevOps Cycle = Faster time to market: By removing the structure tied to specific machines and leveraging cloud resources, DevOps teams gain deployment agility and the flexibility to launch solutions at a quicker pace.

Increased Security: Fargate and EKS are both AWS Managed Services that provide serverless and Kubernetes configuration management, safely and securely within the AWS ecosystem. 

Combines the Best of Both Serverless & Containers: Fargate provides serverless computing with containers. This combination of technologies enables developers to build applications with less costly overhead and greater flexibility than applications hosted on traditional servers or virtual machines.

Enhanced Flexibility and Scalability: Any Kubernetes microservices application can be migrated to EKS easily, with highly scalable serverless capability.

Reduced Costs: With containerization, overhead costs are reduced through the elimination of on-premises servers and network equipment, along with server management, maintenance, and patch/cluster management.

In this next section, we’ll illustrate how to control the resource configuration in Fargate Nodes in Amazon EKS and administer the Kubernetes Nodes on AWS Fargate without needing to stand up or maintain a separate Kubernetes control plane.

Kubernetes Cluster Management in Amazon Cloud

Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Amazon EKS automatically detects and replaces unhealthy control plane instances and provides automated version upgrades/patching for them.

Amazon EKS is also integrated with many other available AWS services to provide scalability and security for applications, including the following:

Fargate with Managed Nodes
The goal achieved through this solution is a more flexible and controlled Kubernetes infrastructure to make sure pods on the worker nodes can communicate freely with the pods running on Fargate. These Fargate pods are automatically configured to use the cluster security group for the cluster they are associated with. Part of this includes making sure that any existing worker nodes in the cluster can send and receive traffic to and from the cluster security group. Managed node groups are automatically configured to use the cluster security group, alleviating the need to modify or check for compatibility.

Our Solution Architecture

1. Create the Managed Node Cluster

Prerequisites: Install and configure the binaries needed to create and manage an Amazon EKS cluster, as listed below:

– The latest AWS CLI

– The eksctl command-line utility

– The kubectl command-line utility, configured for Amazon EKS

Reference: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html

Create the managed node group cluster with the eksctl command-line utility using a command like the one below.
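
A representative command is shown below (the node count and instance type are illustrative assumptions; replace the <<variable text>> with your own values):

$ eksctl create cluster --name <<cluster_name>> --region <<aws_region>> --nodegroup-name <<nodegroup_name>> --nodes 2 --node-type t3.medium --managed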

2. Create a Fargate Pod Execution Role

When the cluster creates pods on AWS Fargate, the pods need to make calls to AWS APIs to perform tasks such as pulling container images from Amazon ECR or the DockerHub registry. The Amazon EKS pod execution role provides the IAM permissions to do these tasks.

Note: When creating the cluster, the eksctl --fargate option can be used to create the necessary profiles and pod execution role for the cluster. If the cluster already has a pod execution role, skip this step and go to Create a Fargate Profile.

With a Fargate profile, a pod execution role is specified to use with the pods. This role is added to the cluster’s Kubernetes Role Based Access Control (RBAC) for authorization. This allows the kubelet that is running on the Fargate infrastructure to register with the Amazon EKS cluster so that it can appear in the cluster as a node.

The RBAC role can be set up by following these steps (a scripted sketch follows the list):

  1. Open the IAM in AWS Console: https://console.aws.amazon.com/iam/
  2. Choose Roles, then Create role.
  3. Choose EKS from the list of services, EKS – Fargate pod for your use case, and then Next: Permissions.
  4. Choose Next: Tags.
  5. (Optional) Add metadata to the role by attaching tags as key–value pairs. Choose Next: Review.
  6. For Role name, enter a unique name for the role, such as AmazonEKSFargatePodExecutionRole, then choose Create role.
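
If you prefer to script these console steps, a minimal boto3 sketch is shown below. The trust principal and managed policy name follow AWS’s published guidance for the Fargate pod execution role; treat them as assumptions and verify them against the current documentation.

import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing EKS Fargate pods to assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "eks-fargate-pods.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="AmazonEKSFargatePodExecutionRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the AWS managed policy that grants the image-pull permissions
iam.attach_role_policy(
    RoleName="AmazonEKSFargatePodExecutionRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy",
)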

3. Create a Fargate Profile for the Cluster

Before scheduling pods to run on Fargate in the cluster, a Fargate profile needs to be defined that specifies which pods should use Fargate when they are launched.

Note: If you created the cluster with eksctl using the --fargate option, then a Fargate profile has already been created for the cluster with selectors for all pods in the kube-system and default namespaces. Use the following procedure to create Fargate profiles for any other namespaces you would like to use with Fargate.

Create the Fargate profile with the following eksctl command, replacing the <<variable text>> with your own values. Specify a namespace (the labels option is not required).

$ eksctl create fargateprofile --cluster <<cluster_name>> --name <<fargate_profile_name>> --namespace <<kubernetes_namespace>> --labels key=value

4. Deploy a Sample Web Application to the EKS Cluster

To launch an app in the EKS cluster, we need a deployment file and a service file. We then apply the deployment and the service to the EKS cluster.

Example:

$ kubectl apply -f <<deployment_file.yaml>>

$ kubectl apply -f <<deployment-service.yaml>>

The commands above create a LoadBalancer so the application can be accessed publicly.

After that, the details of the running service in the cluster can be viewed.

Example:

$ kubectl get svc <<deployment-service>> -o yaml

Observation:

Verify that the hostname/load balancer was created as configured in <<deployment-service.yaml>>.

The service can now be accessed via that hostname/load balancer: simply enter it in the browser to verify that the application is up and running.

Streaming CloudWatch Logs Data to Amazon Elasticsearch Service

Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. Kibana is a popular open-source visualization tool designed to work with Elasticsearch. Amazon ES provides an installation of Kibana with every Amazon ES domain.

Configure ELK with EKS Fargate

– Configure a log group by following the steps provided by AWS for creating a CloudWatch Logs log group

– Subscribe the log group in CloudWatch to stream the data into Amazon ES (a boto3 sketch of this subscription step follows)
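
As a rough illustration of the subscription step, a boto3 sketch follows. The log group name and destination ARN are placeholder assumptions; CloudWatch Logs typically streams to Amazon ES through a forwarding Lambda function created by the console workflow, so substitute the ARNs your setup actually produces.

import boto3

logs = boto3.client("logs")

# Subscribe the log group so new log events are streamed onward.
# The destination is assumed to be a Lambda function that forwards events
# to the Amazon ES domain; replace the placeholders with your own values.
logs.put_subscription_filter(
    logGroupName="/aws/eks/<<cluster_name>>/cluster",
    filterName="stream-to-amazon-es",
    filterPattern="",  # an empty pattern forwards every event
    destinationArn="arn:aws:lambda:<<region>>:<<account_id>>:function:<<es_forwarder>>",
)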

EKS Fargate is a robust platform that provides high availability and controlled maintainability in a secure environment. Because it runs the Kubernetes management infrastructure across multiple AWS Availability Zones, it automatically detects and replaces unhealthy control plane nodes, providing on-demand upgrades and patching with no downtime. This approach enables organizations to reduce time-to-market and remove the cumbersome burdens of patching, scaling, or securing a Kubernetes cluster in the cloud. Looking to explore this solution further or implement EKS Fargate Managed Nodes for your IT ecosystem? Connect with an Idexcel Expert today!

How To Build Business Intelligent Chatbots with Amazon Lex


Enabling Business Intelligence in Chatbots with Amazon Lex

In this fast-paced digital age, organizations need a fast and efficient way of gathering information. Especially in a customer-driven market like fintech, “time is money.” Decisions have to be made quickly and accurately, and incorrect decisions can lead to severe consequences or lost customers. In many fintech applications, information is made available through reporting solutions, presentations, charts, and the like. What customers find difficult is digging out the specific report or data they need through a multitude of mouse clicks and then spending a lot of time analyzing it. There is a critical need for one central point from which a variety of data can be delivered to the user efficiently and effectively. AWS technology and tools open several avenues to make this possible.

Amazon Lex – Machine Learning As a Service

Amazon Lex is a service that enables state-of-the-art chatbots to be built. It has redefined how people in the industry perceive building chatbots. Bots themselves have gradually evolved from typical question-answering bots to more complex ones that can perform an array of functions. Amazon Lex offers features that tackle several complexities faced while building the previous generation of chatbots. The intent fulfillment, dialogue flow, and context management features of Amazon Lex help make conversation with a chatbot as human-like as possible.

This blog discusses how information can be retrieved from databases with a simple question asked to Kasper (the name of our bot). The following sections explain how everything is built, networked, and coupled with a custom user interface.

Solution Architecture

Kasper is a chatbot built specifically for a lending platform to retrieve various data points based on specific inquiries. Like all bots, Kasper is built on intents, utterances, and slots. After adding the intents, their corresponding utterances, and slots, a few slots need to be defined as custom slots. For example, there was a query – “show clients where invoice amount is greater than 20000”. In the utterance section of Kasper, it was recorded as below:

[Screenshot: the utterance as recorded in the Amazon Lex console]

Here ‘cola’ and ‘operatora’ are slot variables under custom slots ‘columnname’ and ‘operator’ respectively.

Natural Language to SQL Conversion

All the responses that require output from the database are sourced with the help of a Lambda function. The JSON response from the Lambda function contains the input transcript, intent, and slot information. The back-end application receives the response from the Lambda function, parses the JSON, and classifies the information into the corresponding intent and slots. The application extracts the slots and intent and then proceeds to build the query.
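
A simplified sketch of that query-building step is shown below; the intent name, slot names, and table are hypothetical, and a production version would validate and parameterize inputs rather than formatting SQL strings directly.

# Hypothetical sketch: turn a parsed Lex response into a SQL query.
ALLOWED_COLUMNS = {"invoice_amount", "balance_amount", "due_date"}
ALLOWED_OPERATORS = {"greater than": ">", "less than": "<", "equal to": "="}

def build_query(lex_response):
    intent = lex_response["intentName"]   # e.g. "FilterClients" (illustrative)
    slots = lex_response["slots"]         # e.g. {"cola": "invoice amount", ...}

    column = slots["cola"].replace(" ", "_")
    operator = ALLOWED_OPERATORS[slots["operatora"]]
    value = float(slots["value"])

    if intent != "FilterClients" or column not in ALLOWED_COLUMNS:
        raise ValueError("Unsupported query")

    return f"SELECT client_name FROM clients WHERE {column} {operator} {value}"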

Responses from Kasper

Responses from Kasper can come in different formats. There can be single-value responses, images, tables, etc. The response type is automatically determined from the intent. A custom website with a chat window has been developed for interacting with Kasper, and the chat window can take both text and audio inputs. The following sections explain each response type with its corresponding chat window.

Response type I – Single values

There are instances where users might want to know a sum, a count, or some other single-value response. For example, an inquiry might be “count the number of clients whose due date is within 2 weeks” or “sum of the invoice amount of all clients”. The response to these queries will be just a single value, e.g., “10,000”.

Response type II – Images and Tables

1. Tables

Images and tables are the next type of response Kasper delivers. Once the SQL query is constructed, the application connects to the database, retrieves the data, and stores it in a pandas DataFrame. This DataFrame can be exported as an HTML table for previewing in the chat window. It can also be downloaded in the form of a CSV file.
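
A minimal sketch of that export step is shown below; the database, query, and file names are illustrative assumptions rather than Kasper’s actual implementation.

import sqlite3
import pandas as pd

# Illustrative only: run the generated SQL against a local copy of the
# lending data and convert the result into chat-window friendly formats.
conn = sqlite3.connect("lending.db")  # placeholder database
df = pd.read_sql_query(
    "SELECT client_name, invoice_amount FROM clients WHERE invoice_amount > 20000",
    conn,
)

table_html = df.to_html(index=False)         # HTML table for the chat window
df.to_csv("kasper_result.csv", index=False)  # CSV download for the user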

2. Images

From the pandas DataFrame, different charts and graphs can be derived. When an image response is expected, charts are generated using Python libraries, saved to a file, and then exported to the chat window. Two types of images are generated – a thumbnail and the actual image. Kasper is equipped with a feature named auto-visualization: based on the DataFrame, the function decides what type of graph or chart should be plotted. Numerous rules are applied before making that decision. For example, the function determines whether a specific column contains continuous or categorical values, and the resulting graph is plotted based on such combinations.
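
A rough sketch of what such an auto-visualization rule can look like follows; the threshold and chart choices are illustrative, not Kasper’s actual rules.

import pandas as pd
import matplotlib.pyplot as plt

def auto_plot(df: pd.DataFrame, column: str, path: str = "chart.png") -> str:
    """Pick a chart type based on whether the column looks continuous or categorical."""
    series = df[column]
    if pd.api.types.is_numeric_dtype(series) and series.nunique() > 10:
        series.plot(kind="hist")                 # continuous values: histogram
    else:
        series.value_counts().plot(kind="bar")   # categorical values: bar chart
    plt.title(column)
    plt.savefig(path)
    plt.close()
    return path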

Response type III – Fallback mechanism with response card

The third type of response is the response card – a response used to clarify the user’s intention. Suppose the user asks an ambiguous question such as “what is the amount of Apollo Inc.”. The chatbot will find the query to be missing some keywords because the user did not specify the type of amount (either invoice amount or balance amount). Kasper then prompts back with a list of possible options, so the user can select the appropriate option and receive an accurate result.

Kasper is a chatbot that has evolved to its current operational capabilities by maximizing Amazon Lex’s potential and incorporating other significant AWS services into its architecture. Currently, Kasper can solve important natural language to SQL problems and answer a few FAQ questions as well. It can also be modified for other domain problems to suit specific needs. Over time, more capabilities can be added, and it could serve as a first-line substitute for human support personnel, freeing up your support team to address critical issues more quickly. If you’re interested in how a chatbot might improve your operations, schedule a free assessment with our Machine Learning team today.

6 Business Continuity Strategies to Implement Post COVID-19

The COVID-19 health crisis impacted businesses, people, and communities in numerous ways, causing us to change our strategies and the way we live going forward. Businesses are adapting to a dramatically new business landscape that is changing the way we will work for the foreseeable future. Organizations are challenged with reinventing strategies, enabling virtual teams with remote workspaces, and exploring what’s possible for creating new innovations. Here are some key strategies to implement to accelerate business continuity and transition to a new working world:

Establish Your Team Leaders

The greatest asset any organization has is its people. Choose the team members with proven reliability, organizational skills, and strong leadership qualities, especially under pressure. Situations like COVID-19 can prove stressful, so it is wise to choose your Business Continuity team with these things in mind. Some roles you might consider designating specifically for Business Continuity purposes are Executive Business Continuity Manager (overall Team Lead), Communication Lead, IT Lead, Human Resources, Facilities/Maintenance, and Operations/Logistics Lead. These roles can depend on your specific business needs and internal departmental breakdown. Once you’ve decided on your key players, it’s time to evaluate the primary business processes that need to continue in case of business disruption.

Document & Identify Critical Processes

From internal human resources processes like payroll, retirement plan administration, and healthcare benefits to business operations such as supply chain management and customer support, each of these requires access to various technologies and secure applications. It is important to know whether these processes can still be performed with the current system architectures and IT tools in place. That leads us into the next strategy, where we connect each process with the existing resources in place to determine whether the business continuity plan being developed will need specific changes, updates, or additions.

Identify Key Technology and Tools

Performing a proper assessment of current tools and technologies to validate capability will reveal where there might be gaps that need to be filled. One key question to consider is, “Will the tools and technologies we currently have in place work in the case of a future change in working environment?” The answer will help identify which technologies or tools might be needed to continue operating seamlessly with minimal disruption. Need help strategizing? Learn more about how to leverage cloud technology to improve business operations and increase performance efficiencies here.

Consider Contingency Technology and Tools

Is your system architecture set up for a new working structure for virtual teams? Is your cloud strategy crystal clear and strong enough to handle changing needs in terms of scalability and operations? Is it ready in case of another change in the working environment or future disaster? For example, it might be necessary to set up virtual workspace situations for employees. As a preferred AWS partner, Idexcel can help implement AWS Workspaces solutions in your organization – enabling business continuity by providing users and partners with a highly secure, virtual Microsoft Windows or Linux desktop. This setup grants your team access to the documents, applications, and resources they need, anywhere, anytime, from any supported device. Learn more about how we can help do that here.

Build A Customer Communication Plan

Communication with your staff, clients, and partners is perhaps the most important element of these strategies. The more they hear from you, the better off you will be with establishing trust and reliability. When communicating, be sure to follow these 3 guidelines:

1. Timing is everything. Responding quickly is key to establishing trust, visibility, and proactivity. It’s critical to be timely with messaging and, depending on the communication sent, to give the recipient proper time to respond and plan.

2. Be clear, concise, authentic, and provide value. Keep your communications simple and to the point. Create messaging that provides value, help, and support during any business changes or possible disruptions. Another key tip: keep it positive and avoid negative words to evoke a more positive feeling and reaction to the communication. The more authentic and personable the messaging, the more likely you are to receive a positive response and create a sense of comfort.

3. Leverage all communication channels. Social media is a great way to keep in touch with your audience. Employees, clients, and partners alike are all very active, especially on LinkedIn, given that it’s a key point of digital communication and connection among professionals. Keep up with internal email communication with your teams as well, checking in often on how the situation may have impacted them.

Set Your Organization Up for Innovation

With a Business Continuity plan in place and the team assembled, now might be the time to consider strategically planning for innovative solutions. Specific technologies can be implemented to ensure accelerated business continuity measures are in place to better set your business and teams up for success.

For example, many organizations are adopting Machine Learning solutions with RPA (Robotic Process Automation). Many websites use chatbots for answering general FAQs asked by customers, eliminating the need for personnel to respond and enabling them to focus on other tasks. Chatbots can positively impact the customer experience and are an ideal tool for short-staffed employers, saving thousands of hours of labor and the associated cost.

If you need help strategizing and creating your business continuity plan, get in touch with us to get connected with an expert.

Is Machine Learning the Solution to Your Business Problem?

The term Machine Learning (ML) is defined as ‘giving computers the ability to learn without being explicitly programmed’ (a definition attributed to Arthur Samuel). Another way to think of this is that the computer gains intelligence by identifying patterns in data sets on its own, improving output accuracy over time as more data sets are examined. Since ML can be a challenging solution to implement, we’ve put together some foundational steps to assess the feasibility of building an ML solution for your organization:

1. Identify the Problem Type

Start by distinguishing between automation problems and learning problems. Machine learning can help automate your processes, but not all automation problems require learning.

Automation: Implementing automation without learning is appropriate when the problem is relatively straightforward. These are the kinds of tasks where you have a clear, predefined sequence of steps currently being executed by a human, but that could conceivably be transitioned to a machine.

Machine Learning: For the second type of problem, standard automation is not enough – it requires learning from data. Machine learning, at its core, is a set of statistical methods meant to find patterns of predictability in datasets. These methods are great at determining how certain features of the data are related to the outcomes you are interested in.

2. Determine if you have the right data

The data might come from you or an external provider. In the latter case, make sure to ask enough questions to get a good feel for the data’s scope and whether it is likely to be a good fit for your problem. Consider your ability to collect it, its source, the required format, and where it is stored, but also the human factor. Both executives and employees involved in the process need to understand its value and why taking care of its quality is important.

3. Evaluate Data Quality and Current State

Is the data you have usable as-is, or does it require manual human manipulation before introducing into the learning environment? A solid dataset is one of the most important requirements for building a successful machine learning model. Machine learning models that make predictions to answer their questions usually need labeled training data. For example, a model built to learn how to determine borrower due dates to improve accurate reporting needs a starting point from which to build an accurate ML solution. Labeled training datasets can be tricky to obtain and often require creativity and human labor to create them manually before any ML can happen.

4. Assess Your Resources

Do you have the right resources to maintain your ML solution? Once you have an appropriate question and a rich training dataset in hand, you’ll need people with experience in data science to create your models. Lots of work goes into figuring out the best combination of features, algorithms, and success metrics needed to make an accurate model. This can be time-consuming and requires consistent maintenance over time.

5. Confirm Feasibility of ML Project

Having worked through the four previous steps for assessing whether or not ML is right for your organization, consider the responses. Is the question appropriate for building an ML business solution? Is the data available, or at least attainable? Does the data need hours of human labor? Do you have the right skilled team members to carry out the project? And finally, is it worth it – meaning, will the solution have a large impact, financially and socially?

It’s important to consider these key questions when assessing whether or not Machine Learning is the right solution for your organization’s needs. Connect with our ML experts today to schedule your free assessment.