I was very impressed with the Dreamforce Developers Keynote. It offered a top-to-bottom demonstration of the new capabilities developers can use to drive what matters most to them: getting users to actually use their products. In this post, I will focus on the major takeaways from the keynote around building apps with Lightning and Einstein.
To set the stage on what matters to developers, the keynote laid out the key metrics of adoption success: Daily Active Users and Monthly Active Users. Developers need to build engaging apps that people want to use; satisfying their own preconceptions and checkboxes leads to shelfware. Apps need to be fast and smart, like Gmail's "did you mean to attach a file?" reminder. To get there, developers need to recognize that modern apps are becoming personalized experiences, or journeys: tailor-made to a user base and adaptive to each user's needs and habits.
What’s so great about Salesforce for app development now?
First, Salesforce handles security on the backend, and that coverage extends across all packages and components. Salesforce boasts the most trusted enterprise cloud, with 400B+ transactions per quarter, 90B+ Apex executions, 54B+ API calls, and 108M+ customer tests executed per release. Its compliance certifications range from CSA STAR to SOC 1-3, to name only a few, so you don't need to worry about DDoS attacks or database intrusions.
Salesforce for Developers allows for solutions with code and without it. You can build basic apps very quickly using the Lightning framework. The declarative builder tools lay down the baseline of your app with click-and-drag functionality, including application architecture such as metrics and navigation as well as a component framework for security, accessibility, and data services. All of this is easy to extend with customization and markup, and you can drop externally developed components into your app and they'll work seamlessly with your own components and code.
The Dreamhouse app showcase went a long way toward demonstrating just how much can be done with the new tools and only a limited amount of code. For example, they used the new force record tag to re-populate a status bar's attributes dynamically as they're edited in the page's fields. The app itself is an attractive, modern home-search tool that presents pictures and information in a swipeable, tappable interface. On the broker's page, we saw components talking to each other automatically: update a field in one component and it's reflected in all the others without any extra effort or code. This wasn't custom coded; it's now a native feature of all Lightning components.
I was especially excited to see the demo of how to use Einstein to make apps smarter. Emily Rose gave that demo and explained which tools do the work behind the scenes: Apache Kafka handles event services, while machine learning is done by Apache PredictionIO and the Salesforce Predictive Vision Service. To customize these algorithms, you can write in practically any language, such as Apex, JavaScript, Scala, or Python. Even before machine learning, there's useful automation you can build with just Process Builder, for example sending a push notification when the price of a favorited listing changes. With Einstein, you can suggest an optimal selling price for a house by running the data set through a calculation of what similar houses sold for. And if you're looking to buy, after swiping through a few houses right (yes) or left (no), you start getting recommendations based on what you liked.
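The similar-houses pricing idea can be pictured as a nearest-neighbors average over comparable sales. This is a hypothetical Python sketch, not the actual Einstein or PredictionIO code; the `suggest_price` helper, the distance formula, and the sample data are all invented for illustration.

```python
# Hypothetical sketch of a "what did similar houses sell for?" price
# suggestion. Not Salesforce's implementation; names and data are invented.

def suggest_price(listing, sold, k=3):
    """Average the sale prices of the k most similar sold houses.

    Similarity here is just square footage and bedroom count --
    a stand-in for whatever features a real model would use.
    """
    def distance(a, b):
        return abs(a["sqft"] - b["sqft"]) + 500 * abs(a["beds"] - b["beds"])

    nearest = sorted(sold, key=lambda h: distance(listing, h))[:k]
    return sum(h["price"] for h in nearest) / len(nearest)

sold_houses = [
    {"sqft": 1500, "beds": 3, "price": 300_000},
    {"sqft": 1600, "beds": 3, "price": 320_000},
    {"sqft": 2400, "beds": 4, "price": 500_000},
    {"sqft": 900,  "beds": 2, "price": 180_000},
]
print(suggest_price({"sqft": 1550, "beds": 3}, sold_houses))
```

A production model would of course use many more features (location, age, market trends) and a trained algorithm rather than a hand-written distance, but the intuition is the same.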
On the backend, Einstein does the machine learning by instantiating a Kafka producer and sending interactions through it as JSON. A Kafka consumer then subscribes to those interactions, which are pushed into the database the algorithm learns from. Through two methods, train and predict, anyone with a little code savvy can fairly easily calibrate the learning algorithm. Emily also showed how you can interact with the app by texting a bot: upload a photo of a house you like, and it gives you recommendations based on the trained algorithm. The code is surprisingly simple because the algorithm was trained beforehand.
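As a mental model of that pipeline, here is a minimal Python sketch. An in-process queue stands in for the Kafka topic (a real setup would use actual Kafka producers and consumers), a producer emits swipe interactions as JSON, a consumer collects them into a database, and a toy model exposes the train/predict pair described above. Everything here, including the `SwipeModel` class and the event shape, is invented for illustration and is not Salesforce's actual code.

```python
import json
import queue
from collections import Counter

# Stand-in for a Kafka topic: in the real pipeline this would be a
# Kafka producer/consumer pair, but the flow of data is the same.
events = queue.Queue()

def produce(interaction):
    """Producer side: serialize an interaction to JSON and publish it."""
    events.put(json.dumps(interaction))

def consume_all(database):
    """Consumer side: read every pending event and store it."""
    while not events.empty():
        database.append(json.loads(events.get()))

class SwipeModel:
    """Toy recommender exposing the train/predict pair."""

    def __init__(self):
        self.liked_styles = Counter()

    def train(self, database):
        # Count how often each house style was swiped right.
        for event in database:
            if event["swipe"] == "right":
                self.liked_styles[event["style"]] += 1

    def predict(self):
        # Recommend the most frequently liked style.
        return self.liked_styles.most_common(1)[0][0]

db = []
produce({"user": "u1", "style": "victorian", "swipe": "right"})
produce({"user": "u1", "style": "ranch", "swipe": "left"})
produce({"user": "u1", "style": "victorian", "swipe": "right"})
consume_all(db)

model = SwipeModel()
model.train(db)
print(model.predict())  # -> victorian
```

The point of the train/predict split is exactly what the demo showed: training can happen offline over the accumulated interactions, so the code that serves a recommendation at request time stays very small.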
While they didn't show all of the code behind the scenes, I was left with the feeling that this is very doable: after an initial setup of particular algorithms, we could apply them across a plethora of use cases, components, and apps with little difficulty. What this looks like once it actually starts appearing in people's components and orgs is a different question, but for now, it looks great!
Interested in learning more about making apps with Lightning and Einstein? Contact us today!