Ray is a modern open source framework that lets you build distributed applications in Python with ease. You can create simple training pipelines, do hyperparameter tuning, data processing, and model serving.
Ray lets you build online inference APIs with Ray Serve. You can easily combine multiple ML models and custom business logic in a single application. Ray Serve automatically creates an HTTP interface for your deployments, taking care of fault tolerance and replication.
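To make that concrete, here is a minimal "hello world" sketch of an HTTP deployment (assuming the Ray Serve 2.x API; the `Greeter` class and its response are made up for illustration):

```python
# A minimal Ray Serve sketch (assumes the Ray Serve 2.x API).
from ray import serve
from starlette.requests import Request


@serve.deployment(num_replicas=2)  # Serve handles replication for us
class Greeter:
    async def __call__(self, request: Request) -> str:
        name = request.query_params.get("name", "world")
        return f"Hello, {name}!"


# Deploys the app and exposes it over HTTP,
# at http://127.0.0.1:8000/ by default.
serve.run(Greeter.bind())
```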
But there is one thing that Ray Serve is missing for now. Many modern distributed applications communicate through Kafka, but there is no out-of-the-box way to connect a Ray Serve service to a Kafka topic.
But don't panic. It won't take too much effort to teach Ray Serve to talk to Kafka. So, let's begin.
First of all, we need to prepare our local environment. We will use a docker-compose file with Kafka and Kafdrop UI containers to start and explore our local Kafka instance (so we assume that you have Docker and Docker Compose installed). We will also need to install some Python requirements to get the work done:
All the requirements can be downloaded via this link.
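For reference, here is a sketch of what such a docker-compose file could look like. This is an assumption on my part rather than the exact file behind the link above: the images (wurstmeister Kafka/Zookeeper, obsidiandynamics/kafdrop), ports, and listener settings are my own choices.

```yaml
# A sketch of a possible docker-compose.yml: a single Kafka broker
# (with Zookeeper) plus the Kafdrop UI on http://localhost:9000.
version: "3"
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Two listeners: one for containers on the compose network,
      # one for clients (like our Ray Serve app) running on the host.
      KAFKA_LISTENERS: INTERNAL://:29092,EXTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,EXTERNAL://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
  kafdrop:
    image: obsidiandynamics/kafdrop
    depends_on:
      - kafka
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: kafka:29092
```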
Now we will create a ray-consumer.py file with a Ray Deployment that will be served with Ray Serve. I won't go into details about Ray Serve concepts, as you can read about them in the documentation. Basically, Ray Serve takes a usual Python class and converts it into an asynchronous Ray Deployment service with the @serve.deployment decorator:
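(What follows is a minimal sketch rather than a full implementation: it assumes the aiokafka client, a broker at localhost:9092 as in the compose file above, and a hypothetical topic name "ray-topic"; the print() call stands in for real inference logic.)

```python
# ray-consumer.py -- a minimal sketch of a Ray Serve deployment
# that consumes messages from a Kafka topic with aiokafka.
import asyncio

from aiokafka import AIOKafkaConsumer
from ray import serve


@serve.deployment
class KafkaConsumer:
    def __init__(self):
        # Serve replicas run inside an asyncio event loop, so we can
        # schedule the consume loop as a background task.
        self.loop = asyncio.get_event_loop()
        self.loop.create_task(self.consume())

    async def consume(self):
        consumer = AIOKafkaConsumer(
            "ray-topic",                         # assumed topic name
            bootstrap_servers="localhost:9092",  # assumed local broker
        )
        await consumer.start()
        try:
            async for message in consumer:
                # Replace this with model inference / business logic.
                print(f"Consumed message: {message.value}")
        finally:
            await consumer.stop()


# With the Ray Serve 2.x API the deployment would be started with:
# serve.run(KafkaConsumer.bind())
```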