Sunday, July 21, 2024

Krishna Rangasayee, Founder & CEO of SiMa.ai – Interview Series

Krishna Rangasayee is Founder and CEO of SiMa.ai. Previously, Krishna was COO of Groq and spent 18 years at Xilinx, where he held multiple senior leadership roles, including Senior Vice President and GM of the overall business and Executive Vice President of global sales. While at Xilinx, Krishna grew the business to $2.5B in revenue at 70% gross margin while creating the foundation for 10+ quarters of sustained sequential growth and market share expansion. Prior to Xilinx, he held various engineering and business roles at Altera Corporation and Cypress Semiconductor. He holds 25+ international patents and has served on the boards of directors of public and private companies.

What initially attracted you to machine learning?

I’ve been a student of the embedded edge and cloud markets for the past 20 years. I’ve seen tons of innovation in the cloud, but very little towards enabling machine learning at the edge. It’s a massively underserved $40B+ market that’s been surviving on old technology for decades.

So, we embarked on something no one had done before: enabling Effortless ML for the embedded edge.

Could you share the genesis story behind SiMa?

In my 20+ year career, I had yet to witness architecture innovation in the embedded edge market. Yet the need for ML at the embedded edge kept growing, driven by advances in the cloud and elements of IoT. It was clear that while companies were demanding ML at the edge, the technology to make it a reality was too stodgy to actually work.

Therefore, before SiMa.ai even started on our design, it was important to understand our customers’ biggest challenges. However, getting them to spend time with an early-stage startup and draw out meaningful, candid feedback was its own challenge. Luckily, the team and I were able to leverage our network of past relationships to solidify SiMa.ai’s vision with the right targeted companies.

We met with over 30 customers and asked two basic questions: “What are the biggest challenges scaling ML to the embedded edge?” and “How can we help?” After many discussions on how they wanted to reshape the industry and listening to their challenges to achieve it, we gained a deep understanding of their pain points and developed ideas on how to solve them. These include:

  • Getting the benefits of ML without a steep learning curve.
  • Preserving legacy applications along with future-proofing ML implementations.
  • Working with a high-performance, low-power solution in a user-friendly environment.

Quickly, we realized that we needed to deliver a risk-mitigated, phased approach to help our customers. As a startup, we had to bring something compelling and differentiated from everyone else. No other company was addressing this clear need, so this was the path we chose. SiMa.ai achieved this rare feat by architecting, from the ground up, the industry’s first software-centric, purpose-built Machine Learning System-on-Chip (MLSoC) platform. With its combination of silicon and software, machine learning can now be added to embedded edge applications with the push of a button.

Could you share your vision of how machine learning will reshape everything to be at the edge?

Most ML companies focus on high growth markets such as cloud and autonomous driving. Yet, it’s robotics, drones, frictionless retail, smart cities, and industrial automation that demand the latest ML technology to improve efficiency and reduce costs.

These growing sectors, coupled with current frustrations deploying ML at the embedded edge, are why we believe the time is ripe with opportunity. SiMa.ai is approaching this problem in a completely different way; we want to make widespread adoption a reality.

What has so far prevented scaling machine learning at the edge?

Machine learning must integrate easily with legacy systems. Fortune 500 companies and startups alike have invested heavily in their current technology platforms, but most of them will not rewrite all their code or completely overhaul their underlying infrastructure to integrate ML. To mitigate risk while reaping the benefits of ML, there needs to be technology that allows seamless integration of legacy code alongside ML. This creates an easy path to develop and deploy these systems to address application needs while providing the benefits of the intelligence that machine learning brings.

There are no big sockets and no single large customer that will move the needle, so we had no choice but to support a thousand-plus customers to really scale machine learning and bring the experience to them. We discovered that these customers want ML, but they lack the internal capacity to build it up and don’t have the fundamental internal knowledge base. They want the ML experience without the embedded edge learning curve, and it quickly became clear that we have to make that ML experience effortless for customers.

How is SiMa able to so dramatically decrease power consumption compared to competitors?

Our MLSoC is the underlying engine that enables everything, and it is important to note that we are not building an ML accelerator. Of the two billion dollars invested into edge ML SoC startups, the industry’s response to innovation has been an ML accelerator block as a core or a chip. What people are not recognizing is that to migrate customers from a classic SoC to an ML environment, you need an MLSoC environment, so they can run legacy code from day one and, in a phased, risk-mitigated way, gradually deploy their capability into ML components. One day they are doing semantic segmentation using a classic computer vision approach; the next day they could do it using an ML approach. Either way, we give our customers the opportunity to deploy and partition their problem as they see fit, using classic computer vision, classic ARM processing, or heterogeneous ML compute. To us, ML is not an end product, and an ML accelerator is not going to be successful on its own. ML is a capability, a toolkit alongside the other tools we give our customers. Using a push-button methodology, they can iterate on their design of pre-processing, post-processing, analytics, and ML acceleration, all on a single platform, while delivering the highest system-wide application performance at the lowest power.

What are some of the primary market priorities for SiMa?

We have identified several key markets, some of which are quicker to revenue than others. The quickest paths to revenue are smart vision, robotics, Industry 4.0, and drones. The markets that take more time, due to qualification and standards requirements, are automotive and healthcare applications. We have broken ground in all of the above, working with the top players in each category.

Image capture has generally been on the edge, with analytics on the cloud. What are the benefits of shifting this deployment strategy?

Edge applications need processing to be done locally; for many applications there is not enough time for the data to go to the cloud and back. ML capability is fundamental in edge applications because decisions need to be made in real time, for instance in automotive and robotics applications, where decisions must be processed quickly and efficiently.
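The real-time argument above comes down to simple arithmetic: a video pipeline gets one frame period of latency budget, and a cloud round trip often exceeds it. The sketch below illustrates this with hypothetical numbers; the function name and all latency figures are made up for illustration, not SiMa.ai measurements.

```python
# Illustrative latency-budget check: can a cloud round trip fit inside
# one frame period of a 30 FPS camera stream? All numbers are hypothetical.

def fits_frame_budget(fps: float, round_trip_ms: float, inference_ms: float) -> bool:
    """Return True if network round trip plus inference fits one frame period."""
    frame_budget_ms = 1000.0 / fps  # e.g. ~33.3 ms at 30 FPS
    return round_trip_ms + inference_ms <= frame_budget_ms

# Assumed: 80 ms round trip to a cloud endpoint vs. 0 ms for local inference.
cloud_ok = fits_frame_budget(fps=30, round_trip_ms=80, inference_ms=10)  # False
local_ok = fits_frame_budget(fps=30, round_trip_ms=0, inference_ms=10)   # True
print(cloud_ok, local_ok)
```

Even a modest network delay blows the per-frame budget, which is why real-time decisions push inference to the edge.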

Why should enterprises consider SiMa solutions versus your competitors?

Our unique methodology is a software-centric approach packaged with a complete hardware solution. We have focused on a complete solution addressing what we like to call Any, 10x, and Pushbutton as the core of customer issues. The original thesis for the company is that you push a button and you get a wow. The experience really needs to be abstracted to the point where thousands of developers will use it, without requiring them all to be ML geniuses or to hand-code and tweak layer by layer to get the desired performance. You want them to stay at the highest level of abstraction and quickly, meaningfully deploy effortless ML. The thesis behind why we latched onto this is its strong correlation with scaling: it really needs to be an effortless ML experience, not one that requires a lot of hand-holding and services engagements that get in the way of scaling.

We spent the first year visiting 50-plus customers globally, trying to understand: you all want ML, yet you are not deploying it. Why? What gets in the way of meaningfully deploying ML, and what is required to really push ML into scale deployment? It comes down to three key pillars, the first being Any. As a company we have to solve problems across a breadth of customers and use models, along with the disparity between ML networks, sensors, frame rates, and resolutions. It is a very disparate world where each market has completely different front-end designs, and if we take only a narrow slice of it we cannot economically build a company. We really have to create a funnel capable of taking in a very wide range of application spaces; think of the funnel as the Ellis Island of everything computer vision. People could be in TensorFlow, they could be using Python, they could be using a camera sensor at 1080p or at 4K resolution. It really doesn’t matter if we can homogenize and bring them all in, and if you don’t have a front end like this, you don’t have a scalable company.

The second pillar is 10x. Part of why customers cannot deploy and create derivative platforms is that everything is a go-back-to-scratch effort to build a new model or pipeline. And as a startup, we need to bring something very exciting and compelling, where anybody and everybody is willing to take the risk, even on a startup, based on a 10x performance metric. The key technical merit we focus on solving for in computer vision problems is the frames-per-second-per-watt metric. We need to be dramatically better than anybody else so that we can stay a generation or two ahead, and we took this on as part of our software-centric approach. That approach created a heterogeneous compute platform, so people can solve the entire computer vision pipeline on a single chip and deliver 10x compared to any other solution. The third pillar, Pushbutton, is driven by the need to scale ML at the embedded edge in a meaningful way. ML toolchains are very nascent and frequently broken; no single company has really built a world-class ML software experience. We further recognized that for the embedded market it is important to mask the complexity of the embedded code while also giving customers an iterative process to quickly come back, update, and optimize their platforms. Customers really need a pushbutton experience that gives them a response or a solution in minutes rather than months to achieve effortless ML. Any, 10x, and Pushbutton are the key value propositions; it became really clear to us that if we do a bang-up job on these three things, we will absolutely move the needle on effortless ML and scaling ML at the embedded edge.
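The frames-per-second-per-watt metric mentioned above is a simple efficiency ratio: throughput divided by power draw. The sketch below computes it for two made-up devices; the device names and all numbers are hypothetical, not benchmark results from SiMa.ai or anyone else.

```python
# Hypothetical comparison of two edge devices on the FPS-per-watt metric.
# Device names and figures are invented for illustration only.

def fps_per_watt(fps: float, watts: float) -> float:
    """Efficiency metric: frames processed per second, per watt consumed."""
    return fps / watts

devices = {
    "device_a": {"fps": 500, "watts": 5.0},   # lower throughput, low power
    "device_b": {"fps": 900, "watts": 30.0},  # higher throughput, high power
}

scores = {name: fps_per_watt(d["fps"], d["watts"]) for name, d in devices.items()}
best = max(scores, key=scores.get)
print(scores)  # {'device_a': 100.0, 'device_b': 30.0}
print(best)    # device_a
```

Note how the raw-throughput winner loses on this metric: at the embedded edge, efficiency per watt, not peak FPS, determines which device fits the power envelope.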

Is there anything else that you would like to share about SiMa?

In the early development of the MLSoC platform, we were pushing the limits of technology and architecture. We were going all-in on a software-centric platform, an entirely new approach that went against the grain of all conventional wisdom. The journey of figuring it out and then implementing it was hard.

A recent monumental win validates the strength and uniqueness of the technology we’ve built. SiMa.ai achieved a major milestone in April 2023 by outperforming the incumbent leader in our debut MLPerf benchmark performance in the Closed Edge Power category. We’re proud to be the first startup to participate and achieve winning results in the industry’s most popular and well-recognized MLPerf benchmark, running ResNet-50, for our performance and power.

We began with lofty aspirations and to this day, I’m proud to say that vision has remained unchanged. Our MLSoC was purpose-built to go against industry norms for delivering a revolutionary ML solution to the embedded edge market.

Thank you for the great interview; readers who wish to learn more should visit SiMa.ai.

