CNCF Webinar Series – Building Serverless Application Pipelines




Welcome everyone. My name is Ihor, I'm a developer advocate at the Cloud Native Computing Foundation, and this is part of our regular webinar series. Today we have Sebastien Goasguen from Bitnami, who will speak about serverless application pipelines. I'd like to mention that we will have enough time at the end of the presentation to answer your questions, so feel free to drop them into the Q&A section that you see in the interface right now. Sebastien, go ahead please.

Thank you very much, Ihor, and thank you everybody for joining. I hope you can hear me well, and if not I'm sure Ihor will say something. I'm broadcasting the slides full screen, so I don't have any view of the chat; if anybody asks a question while I'm speaking and demoing I won't see it, so please stop me at any time if there are technical difficulties or questions.

Okay, so as Ihor said, my name is Sebastien Goasguen, and I'm the senior director of cloud technologies at Bitnami. I'm currently speaking to you from Geneva in Switzerland, where it's pretty cold and there's a lot of snow. I'd like to talk to you about building serverless application pipelines: what that means exactly, how serverless plays a role within CNCF, and how the solution we are developing at Bitnami, called Kubeless, plays a role in the overall serverless and cloud native landscape. This is very much a talk that I gave at KubeCon, but I've modified a few things and added some new material, and I hope you'll like it.

First, a few quick words about Bitnami — this is not meant to be a marketing presentation, and I'll dive into serverless shortly. If you don't know Bitnami: we package applications for all the major cloud providers, like AWS, Google, and Azure. We track 150 open-source applications, and we automatically build those applications, package them, and deploy them onto those clouds. We also build, of course, Docker containers, Helm charts, and so on with the same pipeline, so we have lots of expertise in packaging those apps and delivering them to any platform. As for products, we have an application catalog, which is what we use to build the marketplaces for the clouds; that's probably what you know Bitnami for. Just last week we announced a new product called Stacksmith, which is not an open source product, but I'll mention it here: it's the productization of the internal build pipeline we use to populate the cloud marketplaces, now turned into a tool called Stacksmith. And of course, what you know us for in the Kubernetes ecosystem is our open source efforts: things like Kubeapps, Kubeless, and our work with Helm charts. I joined Bitnami through the acquisition of Skippbox, where we had developed Kompose — with a K — which was one of the first projects in the Kubernetes incubator and has graduated; you can now find it in the Kubernetes organization. I also developed Cabin, the mobile app, which we just fixed — so if you've been complaining about the latest bugs, the latest version is now out there. And right now the focus is Kubeapps and Kubeless.
To do all of this we partner with Microsoft, SAP, and others. Okay, so that's it for the little Bitnami spiel. I don't think we need to spend a full hour here; I'd like to get to the essence, not waste anybody's time, and go straight to the meat of it.

What is serverless? Naming is really complicated — it's hard to come up with good names — and "serverless" unfortunately creates a lot of confusion and even some anger in some circles. At the end of the day you're calling a function which is executing in some binary; there is Linux, there is a process somewhere that's running and executing that function. So with serverless there are definitely servers behind it. What it means is basically that you are only deploying a very small piece of business logic: you don't concentrate on the infrastructure, you don't manage that infrastructure, and you just pay for what you use. That's not new in the cloud sense — the cloud has been a utility for a while — but with serverless you have a fine-grained payment model, if you wish: you only pay for the function call. As a tweet from several years back put it, there is no serverless, it's just someone else managing the execution environment of that function, and you pay a fraction of a cent whenever the function runs. So the hope is that you don't have to manage any infrastructure, you don't have to provision any infrastructure, you don't know where it's running or how, you don't care, and you only pay for each function call.

This makes sense in the cloud native context. CNCF is really broadening its horizons: it started with Kubernetes, which just graduated today — congratulations, Kubernetes — but CNCF also recently accepted Vitess, the distributed database that powers YouTube, so you have different types of software joining CNCF. There is a serverless working group within CNCF, and they define serverless as referring to the concept of building and running applications that do not require server management; it describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand. I think the biggest aspects are really the fine-grained payment and the fact that you're not managing any servers.

A lot of people will say that if it's a service offered by a cloud provider — like AWS Lambda, Google Cloud Functions, or Azure Functions — then it's serverless, because the users are not aware of any of the infrastructure and they don't manage anything. Those same people will say that if you're deploying something like what we're going to talk about — if you're deploying Kubeless on-prem and you're managing it — then you're losing the serverless approach and you're just running functions as a service, FaaS. So the demarcation line would be: if you are accessing a cloud provider service it's serverless; if you're running it on-prem, you're talking about FaaS. Bottom line, the real difference is ease of usage, and it really depends on your user persona.
If you're using something serverless like Kubeless on-prem, but you're not the admin managing the cluster and the solution, then you're really not seeing any of that infrastructure. So that's a bit of a beer-conversation distinction; we could talk more about this.

The serverless working group went further than coming up with a definition: there is a pretty extensive white paper written by the working group, and they also recently started working on a landscape. Here is a snapshot of that landscape, where you can see a lot of the solutions that play within the serverless ecosystem and at what layer they sit. In security you see Snyk, for example; AWS Lambda is in the platform layer; and all the way on the right, in the section called Kubernetes-native, you see solutions like Kubeless, Fission from Platform9, OpenFaaS, and so on. This is going to evolve — it's still very much a work in progress, and there will be improvements to the diagram. Landscapes are hard; it's very difficult to put things into boxes. For example, in my view OpenFaaS is not really Kubernetes-native because it also supports Docker Swarm, so it should actually be in the hybrid column, and Funktion is not really being developed anymore, so I don't think it should be on there. But it's the beginning of a landscape, so definitely keep an eye on it and you'll see where things go.

So who is the de facto solution for serverless right now? Once again AWS is in the lead, and AWS has been offering Lambda since about 2015, so it's been a couple of years already. What do you do with Lambda? You have small units of code — small pieces of business logic — on your computer that you upload into the cloud, and that business logic is called upon when events are emitted by different event sources. It could be another AWS service, like an S3 bucket: you put an object into an S3 bucket, a notification is sent to an underlying message broker, and that message calls the function you uploaded. Or it could be a very simple HTTP endpoint: typically you're writing a web app, you have different routes, and when you call /something it calls your function. So the code you upload to Lambda — the function — is triggered when events are emitted, and that's very important. I feel like a lot of the time when we talk about serverless we reduce it to webhooks; we are very familiar with webhooks — an HTTP endpoint that you call over HTTP and get a response back. But with Lambda it's actually any type of function that's being wrapped — you don't really know how — and that function is triggered by an event, and there are lots of different types of event sources; HTTP is just one way to transport an event. In the context of AWS you've got Kinesis, SQS, SNS — almost all the AWS services can emit events and trigger Lambda.

So what does it look like from the CLI? Intentionally, I just pasted the documentation from AWS: aws lambda create-function. You specify a region, you specify a function name, and you see the zip file — that's where the code for your function is. It's interesting, because Lambda makes you create a zip, that zip is uploaded to S3, and then AWS processes the zip to create the running environment; we'll get back to that. There is a concept of runtime, which is the language, plus a timeout and memory. The handler information is the base name of the file that contains your function, and the handler is the actual function name.
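As a rough illustration of the kind of call he's describing — the function name, zip file, and role ARN below are placeholders, and the exact values will differ in your account — an invocation might look something like this:

```
aws lambda create-function \
  --region us-east-1 \
  --function-name hello \
  --zip-file fileb://hello.zip \
  --runtime python3.6 \
  --role arn:aws:iam::123456789012:role/lambda-basic-execution \
  --handler hello.handler \
  --timeout 15 \
  --memory-size 128
```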
The fact that you provide a zip here is interesting, because it means that the first time you create a function, the time from making this CLI call to actually being able to call the function is going to be quite long: the zip needs to be uploaded to S3, and then AWS has an internal build which provisions the environment where that function can execute. That means installing dependencies, creating isolation, potentially provisioning some EC2 instances or containers — you don't know, and that's the point, it's serverless, you don't know what's happening. So that first call is going to take a little while, but subsequently, if you keep calling the function, it will be extremely fast, in the hundred-millisecond-or-less latency range. That's the basic CLI for Lambda, and that's how you start deploying functions and building much more complex applications.

The three concepts I think are very important here are: function endpoints — you have a function, you deploy it, and then you can call that function somehow, somewhere, so you have an endpoint for that function; triggers — what is triggering the function? Is it an event happening in an S3 bucket, a stream of data in Kinesis, a message in SQS? There are lots of different types of triggers — similarly in Google Cloud, is it a Cloud Pub/Sub event, a Cloud Storage event, you name it — and they all have somewhat different semantics. And the third concept is definitely the event itself: what is an event? That's where the work the serverless working group is doing on the CloudEvents spec is extremely important. Note that this work was initiated by the Serverless.com startup; Austen Collins and his colleagues really pushed hard on creating that spec. Those are the three very important concepts, with triggers being especially important.

There are lots of different solutions out there: AWS Lambda, which I mentioned, at the top left, Cloud Functions, Azure Functions, and then solutions you can install on-prem — OpenWhisk, Fn, which is a solution from Oracle, Nuclio (that's the little Superman guy there), OpenFaaS, and Kubeless, among others. There are also new solutions appearing these days, like riff from Pivotal, and VMware has their own thing, Dispatch. Lots of different solutions are being created, and they all have their pluses and minuses, but that's not really the purpose of this talk, so I'm going to concentrate on Kubeless, because that's the project I work on, and then try to be a bit more generic about the overall architecture of these systems, how to deal with events, and so on. Kubeless is being used and co-developed with SAP, and we have Blackrock as a big user; if you want to know what they do with Kubeless, you should look at the KubeCon talk that I gave in Austin a couple of months ago.

So, Kubernetes-native: you saw that section in the CNCF serverless landscape I showed, and the reason I made a little poke at OpenFaaS not being
Kubernetes-native is because, to me, Kubernetes-native means that you're extending the Kubernetes API. I think OpenFaaS has now moved to using a custom resource definition as well, but they also have their own faas-netes-style controller, and it also runs on Docker Swarm; so for me, Kubernetes-native really means an extension to Kubernetes. You can do that using something called a custom resource definition, which is a core API in Kubernetes and allows you to create new REST API endpoints. That's what we did in Kubeless: we created a new REST API endpoint to manage and create function objects. Doing this, we simply have a new object in Kubernetes called a function: you have pods, you have deployments, you have services, but now you also have a function. Then you write a controller, and that controller, in our case, creates the deployment, creates a config map, can create an ingress, and so on — the core objects of any Kubernetes application. So we're really extending the Kubernetes API and making use of all the Kubernetes API primitives.

Serverless is also famous for providing almost transparent autoscaling — that's the promise: if you start calling your function a lot, bang bang bang, the system automatically scales the number of available function endpoints and you can handle more load. So how can you do autoscaling in Kubernetes? Kubernetes has the Horizontal Pod Autoscaler, HPA, but the basic HPA works on CPU and memory. What we did in Kubeless is autoscaling based on custom metrics. That's very important, because some functions you want to scale based on the rate of function requests, and those functions may not be consuming a lot of memory or CPU even though you're calling them a lot. So you want to be able to autoscale on custom metrics, and we do that using Prometheus, which is of course also a CNCF project — if you look at how this entire CNCF cloud native landscape comes together to help build a serverless solution, that's really great. We use Prometheus monitoring inside our runtimes to be able to build that autoscaling feature.
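To make the custom-metrics autoscaling he describes a bit more concrete, here is a minimal sketch of what an HPA targeting a per-pod custom metric could look like on Kubernetes 1.8/1.9 with the autoscaling/v2beta1 API. The deployment name and the metric name `function_calls` are illustrative placeholders, not the exact metric Kubeless exposes.

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: hello                  # the deployment created for the function (placeholder)
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: function_calls # hypothetical per-pod metric scraped by Prometheus
      targetAverageValue: "10"
```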
The last component I'll mention is a service mesh, for example Istio. We recently did a proof of concept, and service meshes are definitely a natural extension to this entire architecture, because all the function endpoints are mini microservices: with a service mesh you get network encryption between all the functions that talk to each other, you can potentially add authentication and authorization for the function endpoints, you get distributed tracing, and so on. So Kubeless is an extension to the Kubernetes API, it leverages the autoscaling capability, it leverages Prometheus monitoring from CNCF, and it will leverage service meshes — we'll see if Istio joins CNCF at some point. It really becomes a natural integration of all those systems to give you an easy way to deploy very small pieces of business logic that can be triggered by any event source, and triggering on events is very, very important.

So, the architecture. I tried my best today to make a nice diagram for you. You see the Kubernetes API server extended with the function CRD; once you have the CRD that defines the function endpoint, you can create function objects and store them in the etcd of the Kubernetes cluster — this is all regular Kubernetes operation. In green you have the Kubeless controller — that's where the magic of Kubeless is — which watches function objects being created; when it sees an object created or deleted it acts, and if it's a creation it creates a deployment, a service, potentially an ingress, and stores the actual function business logic in a config map. At the end of the day you end up with a running pod — a running container — with the function injected into it.

I'm talking a lot; hopefully this is interesting. I'm not seeing the questions — Ihor, if you're still there, are there any questions? We have only one question. It's Klaus Deissner, and Vasu — his last name escapes me — and they're working out of Walldorf. Klaus is in the messaging group, and there are also the folks from Hybris, the e-commerce platform: they are integrating Kubeless into Hybris, so that when you have events in the Hybris e-commerce platform it can call functions deployed with Kubeless. Go to the GitHub page, github.com/kubeless/kubeless, or join the #kubeless channel in the Kubernetes Slack, and you'll be able to reach out to them.

Here I wanted to make a little segue: there is no war between serverless and containers. The fact is that containers are just a really nice way for us to deploy functions — functions as a service — and there is no friction to be had between functions and containers. Kubernetes is a very nice platform to build on, and at the end of the day Kubernetes manages containers, so here the functions end up running in containers. There is no need to try to create a war between serverless and containers.

So, the famous custom resource definition — something you should definitely check out in Kubernetes: kubectl get customresourcedefinitions, and you'll see a functions.kubeless.io CRD, which then allows you to say kubectl get functions. You'll see the list of functions and you can deal with functions just like any other Kubernetes object: get the YAML, get the JSON, edit the function in place, you name it. The functions are then managed by this controller, and here's a little hat tip to the Google Cloud Platform metacontroller — I think they changed the repo since I wrote the slide — but the CRD-plus-controller architecture is definitely where a lot of Kubernetes add-ons are going, and if you're trying to build on top of Kubernetes you should become familiar with CRDs and writing a controller.

On monitoring, I mentioned Prometheus from CNCF. Prometheus has clients in lots of languages, so if you're writing a Docker image that runs, for example, some Python code, you can import the Prometheus client and instrument your function — instrument your Python program with Prometheus code. What that means is that the container will expose an HTTP endpoint that serves metrics, and Prometheus will scrape those metrics so that you can do autoscaling. The one thing to keep in mind — and you can see it in the link I pasted at the top — is that all our runtimes are instrumented with Prometheus; that way we expose function metrics very close to the function itself and the runtime, and this allows us to build beautiful dashboards for all the functions in the system.
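As a rough sketch of the kind of instrumentation he's describing — this is not the actual Kubeless runtime code, just an illustration using the official prometheus_client library with made-up metric names and port:

```python
from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metrics for a function wrapper.
CALLS = Counter("function_calls_total", "Number of times the function was called")
DURATION = Histogram("function_duration_seconds", "Time spent executing the function")

def handler(event):
    # The function's business logic would go here.
    return "hello world"

def call_function(event):
    CALLS.inc()               # count every invocation
    with DURATION.time():     # record execution time
        return handler(event)

if __name__ == "__main__":
    # Expose /metrics on port 8080 for Prometheus to scrape.
    start_http_server(8080)
    print(call_function({"data": "test"}))
    # A real runtime would now keep serving HTTP requests and call call_function per request.
```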
As I said, with this instrumentation you can get custom metrics. I will mention that the challenge we've had with autoscaling on custom metrics is that it was a little bit difficult to configure in Kubernetes 1.7 and 1.8, and I'm very happy to see that with Kubernetes 1.9 you can now use the Kubernetes autoscaler with custom metrics on GKE. It's still a little bit tricky — you need to export your metrics to Stackdriver and so on — but you can now do autoscaling on custom metrics; it was a big 1.9 feature and something they managed to set up on GKE. So definitely pay some attention to this, and note that it's a little bit tricky to set up on your own, so be patient as you try.

That's the architecture. Of course, it's very important for any system to play well with the current ecosystem, and if you go back to the CNCF serverless landscape, you'll see one of the solutions in the ecosystem is Serverless — the Serverless Framework. It's a Node.js tool that builds an abstraction on top of all the serverless providers: Lambda, Cloud Functions, Azure Functions, OpenWhisk, and Kubeless. Kubeless is one of the solutions supported in Serverless, and that allows you to write YAML manifests for your functions and quickly deploy them using that tool.
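For reference, a serverless.yml for the Kubeless provider might look roughly like this; it's a sketch based on the serverless-kubeless plugin, and the exact provider and runtime fields may differ between plugin versions:

```yaml
# serverless.yml (illustrative)
service: hello

provider:
  name: kubeless
  runtime: python2.7

plugins:
  - serverless-kubeless

functions:
  hello:
    handler: handler.hello   # file handler.py, function hello
```

From there, the usual Serverless Framework workflow applies: `serverless deploy` to push the function and `serverless invoke -f hello` to call it.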
The other thing I wanted to mention is CloudEvents, because in the concepts I talked about at the beginning I tried to emphasize that serverless is more than just a way to deploy webhooks — functions with an HTTP endpoint so you can easily deploy simple webhooks. It's way more than that: you need to be able to trigger those functions from any event source. Examples are messages in Kafka, messages in NATS.io, messages in any message broker, but also any cloud service — Google Cloud Pub/Sub, Cloud Storage, S3, Microsoft Event Hubs, you name it. All these systems emit events; even Kubernetes itself emits events, and it would of course be great to trigger functions based on Kubernetes events. The big challenge is that there is no specification for an event: all the events out there have their own spec, their own format, and there is no standard. So there is an effort, now called CloudEvents, that aims to describe a spec for what an event is in a cloud native world, and with that it becomes possible to build libraries, SDKs, and so on that adhere to the CloudEvents specification. It's still very early — let's be honest — and every time I talk I try to tell people the level of maturity of things so that nobody gets fooled into jumping on something while it's still fresh. CloudEvents is definitely still fresh, and I'm expecting there will be changes. We're paying close attention to it in Kubeless, and we actually just merged a change for all our runtimes so that the actions exposed by our runtimes consume CloudEvents. So here you see the basic spec: you have an event and you have a context; the context is the specific context of the function — the function name, timeout, runtime, memory limit — typically, what is in the context is what you were providing to the CLI when deploying the function. Then in the event payload you have the actual data, plus the event type, a timestamp, and potential extensions. That's an example of CloudEvents; I expect this to change slightly over the next six months as CloudEvents becomes more mature, but it's the beginning of a standardization of what an event looks like.

Now I should actually do a demo, otherwise I'm going to lose you all, but I did want to mention where we're going long term with Kubeless. As you build these apps and think about what they are made of, you'll see that there are lots of different event sources, and a system like ours needs to be able to handle them. We are almost done with a very important refactor where we will be able to add any type of event source as a new Kubernetes CRD, associated with its own controller; those controllers will spawn consumers of events, in whatever format those events are, and those consumers will call the function using the CloudEvents interface. This is probably the first time I've talked about this publicly, but it's a significant and very exciting refactor, because we actually have it now: we have a Kafka controller that runs inside Kubernetes, because there is a trigger CRD for Kafka. When you tell the system that a function needs to be triggered by an event on a Kafka topic, the Kafka controller creates a Kafka consumer on the fly; that consumer gets the message and then calls the function via HTTP. That means every function is now exposed over HTTP with a CloudEvents-compliant interface, and we can then add any type of event source — an SQS controller, a Kinesis controller, you name it. So this is a very, very exciting development.
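To illustrate the mechanism he's describing — a controller-spawned consumer that forwards Kafka messages to a function's HTTP endpoint — here is a heavily simplified sketch using the kafka-python and requests libraries. The topic name, broker address, function URL, and envelope field names are all assumptions for illustration, not Kubeless internals:

```python
import requests
from kafka import KafkaConsumer

# Placeholder values standing in for what a trigger controller would configure.
TOPIC = "slack"
BROKERS = ["kafka.kubeless:9092"]
FUNCTION_URL = "http://slack-event.default.svc.cluster.local:8080"

consumer = KafkaConsumer(TOPIC, bootstrap_servers=BROKERS)

for message in consumer:
    # Wrap the raw message in a minimal, cloud-events-inspired envelope
    # (field names are illustrative, not the CloudEvents spec).
    event = {
        "data": message.value.decode("utf-8"),
        "event-type": "kafka.message",
        "event-time": str(message.timestamp),
    }
    # Call the function over HTTP, as the spawned consumer would.
    resp = requests.post(FUNCTION_URL, json=event)
    print(resp.status_code, resp.text)
```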
But what's a serverless application, really? As we talk about these pipelines, what are those apps made of? The best way to see it is to go to the AWS Lambda page — sorry for pointing at the competition, but I'm a graphical guy, I like to see pictures. On the AWS Lambda page you'll see a list of case studies with very simple pipelines like this: take a picture, stick it in S3; when it lands in S3 it triggers a function call to create a thumbnail, resize it, or create a PDF, and then stick the result into some other storage. That's one pipeline. Another example is data streams and processing: you have lots of events going through Kinesis, each stream is associated with a Lambda, so events on a specific stream get processed by a Lambda which then stores them in DynamoDB, for example. You've got mobile case studies, IoT case studies, optical character recognition, you name it.

So how will you build those apps? That's really the vision behind why we do this at Bitnami, why we are interested in serverless applications, and why we are getting involved: we see those apps, those pipelines, as a combination of, for example, Helm charts — local services deployed in your own Kubernetes cluster, whether that's on-prem, GKE, or AKS on Azure, it doesn't really matter — plus some cloud services that you bring in thanks to the service broker. I haven't talked about the service broker yet, but it makes a lot of sense; it comes from the Cloud Foundry folks, and we've integrated it inside our solution called Kubeapps. The idea is to get an instance of a service from a cloud provider, get some bindings — meaning a username, a password, some credentials to talk to that service instance — and have all of that inside Kubernetes. So you have some local services, some remote services, and then how do you glue everything together? You glue everything together with your business logic, deployed as functions and triggered by events emitted by those services. That's really the architecture of the app. You can see some early examples in our function store at github.com/kubeless/functions, though this is going to go through major changes. I hope that gets you excited. I'm going to show you a demo, but clearly, if you want to try it, try Kubeless; look at the Katacoda playground — there is a Kubeless scenario on Katacoda. So let's get ready for some demos; I do have demos here, but if there are questions, let me know.

If you are ready for questions right now, we have one. Someone is asking: as someone who knows very little about FaaS, what does Kubeless do that improves upon base Kubernetes? It allows you to deploy functions without compiling or building a Docker image — there is no docker build, none of that. Let me show you the demo and then maybe you'll get a better sense of what it is.

You should see my screen. I'm on a Kubernetes cluster on GKE; it's a relatively standard cluster, version 1.8.7. Let's see if I get the list of nodes back — there you go, three nodes; I just bumped up the memory a little bit. I'm going to list all the pods in a namespace called kubeapps. Kubeapps is another open source project from Bitnami, which we call a package-agnostic application launchpad: with Kubeapps you can launch charts, you can launch functions, you can now create instances of cloud services using the service broker, and we are working on supporting more types of applications, so that whatever you're running in Kubernetes you can instantiate and manage through Kubeapps. I'll show you the dashboard. What do you see in the kubeapps namespace? A lot of Helm-related things, like a Tiller server; there is an ingress controller, a little database to keep the information about the charts, the user interface for Kubeless, an API which gives us a basic REST API to Tiller, and then the dashboard. That's what I have running in my cluster. I also have a kubeless namespace, and in the kubeless namespace you see a controller — that's the thing that watches the function objects — and a basic Kafka and ZooKeeper setup. If I look at my custom resource definitions, you'll see that I have several, including the functions one, and if I do a kubectl get functions, you can see that Kubernetes is now aware of functions, but right now I don't have any. So that's the setup of my current Kubernetes cluster; I've bootstrapped the entire environment with Kubeapps.
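The inspection he's narrating corresponds roughly to commands like the following — the namespace names match what he describes, but treat this as an illustration of the flow rather than a transcript of his exact terminal session:

```
kubectl get nodes                      # three-node GKE cluster
kubectl get pods -n kubeapps           # Tiller, ingress controller, Kubeapps dashboard, Kubeless UI, ...
kubectl get pods -n kubeless           # Kubeless controller plus Kafka and ZooKeeper
kubectl get customresourcedefinitions  # includes functions.kubeless.io
kubectl get functions                  # empty for now
```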
So what I'm going to do now is actually show you the Kubeapps interface. For this I just use the Kubeapps CLI: I run kubeapps dashboard, and boom, you should now see the Kubeapps dashboard. What you see here is a list of charts — those are the upstream charts from the kubernetes/charts repository, so we are surfacing that chart repository — and you can add your own chart repositories: you see here I added one for the service catalog and one for functions. Once you find a chart you click on it and you can deploy; for example I could do Kafka, Kong, you name it. Usually I scroll all the way down and launch my little WordPress. You see it renders the readme, and the key is that you have a button that says "Deploy using Helm". This deploys the WordPress chart by talking to Tiller via a little proxy. You see it's now deploying — I'm just showing you this as a quick demo — and when you go to Applications you see the applications you have installed. If I click on Minio, which I installed previously, there is a URL; I click on that URL, it opens a window, and I have access to Minio. So this just shows that through Kubeapps you can deploy charts.

But you also have a Functions tab. This is not yet nicely integrated — bear with us, it's coming — and here you can simply create a function, say hello-cncf for example. You see I'm going to trigger that function over HTTP, it's going to be a Python function, and I create it. It creates a very basic function — hello-cncf — that returns "hello world". If I had other functions, I could refresh the list on the left and I would see them. The bottom line is that underneath this is actually going to create a deployment and a service, and here I see the logs of the pod that ended up being created. When I click "run function" on the right, it tells me "hello world", and I see in the logs of my app that it was triggered with a GET on / and returned a 200. If I call it again, the log refreshes. So that's the basics through Kubeapps, which bootstraps your cluster with everything you need — not only serverless but also charts — and here I was able to create a dead-simple hello world through the UI. You can imagine what I was saying about those pipelines: some bits of your app will be deployed as charts, some bits will come through the service broker — that's integrated in Kubeapps too, but it makes the demo much longer — and the glue code will be deployed via Kubeless.

If we go back to the CLI now and clear the screen: kubectl get functions — we now have a function called hello. Under the hood a config map has also been created, the hello config map; that's where the function is stored. If I look at the YAML of that config map, you see that the hello-cncf function is actually stored in it. So when I created the function at the function API endpoint, the controller picked it up and created a config map, and it also created a deployment: kubectl get deployments, and you'll see a hello deployment. I can of course create a function from the CLI as well. I'll just show you the readme: the full CLI is kubeless function deploy; you give it a name, you specify the file with --from-file, you specify the handler, which is the base name of the file containing your function plus the actual function name, you specify the runtime — Python here, we also do Node, you name it — the function trigger, HTTP in this case, and dependencies, and so on.
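Pieced together from his description, the deploy and call commands look roughly like this; the function and file names are placeholders, and the flags reflect the Kubeless CLI of that era (newer versions moved triggers into separate subcommands):

```
kubeless function deploy crypto \
  --runtime python2.7 \
  --from-file crypto.py \
  --handler crypto.handler \
  --trigger-http \
  --dependencies requirements.txt

kubeless function ls
kubeless function call crypto --data 'bitcoin'
```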
So here I deployed a function through the CLI. If I do a get functions, you'll see that I now have a crypto function created. I look at my pods and I should see a crypto pod — there you go, it's already running. Now I can actually call it: kubeless function call crypto, with --data, and I pass bitcoin. It should return — except it seems to need single quotes; should I put single quotes? There you go, single quotes. The current price of Bitcoin is 10,910 US dollars, and if you need another type of crypto, there you go. So not only can we deploy through the UI like I showed you, we also deploy from the CLI, of course, and if I refresh my UI you'll see my crypto function and the actual code of the function.

I'll do one more — I don't want to take too much of your time — but one thing that is very interesting is that a function is actually a Kubernetes object, and that's very important: you could actually write a manifest for your function. The apiVersion would be kubeless.io/v1beta1, which is just a new API group, and the kind is not Pod or Deployment, it's Function. This kind is only available if you install Kubeless; if you don't install Kubeless, the Function kind doesn't exist — that's only possible because of the CRD. You have metadata — you can put annotations on a function, you can put labels — and you can write a spec, of course. By default Kubeless uses a basic spec, but here you see that the function code is part of the spec of the Function object, and you can also define a deployment, i.e. the specifics of the deployment that gets created by Kubeless. If you write your Function object just like this, you can simply do a kubectl create; if you are a Kubernetes user you land right back on your feet, this is just a normal object. It says the function was created, and because it's a Function kind, when I use the Kubeless CLI — kubeless function ls — you see the function from my hello.yaml manifest, the one I just created with kubectl create. I hope you get the idea: this is actually quite nice, because a function can also be just a pure manifest, and then you can call it like this — kubeless function call — and it says "hello world". So that one is quite nice.
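A Function manifest along the lines he describes might look like the sketch below; the metadata, the inline code, and the exact spec field names (runtime, handler, deps, function) are written from memory of the v1beta1 API and should be checked against the Kubeless docs for your version:

```yaml
apiVersion: kubeless.io/v1beta1
kind: Function
metadata:
  name: hello-yaml            # placeholder name
  labels:
    created-by: manifest
spec:
  runtime: python2.7
  handler: hello.handler      # file hello.py, function handler
  deps: ""
  function: |
    def handler(event, context):
        return "hello world"
  # deployment: {}            # optionally customize the generated Deployment here
```

You would create it with `kubectl create -f hello.yaml` and then list it with `kubeless function ls`, as in the demo.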
Do we have time for one more? Let's do the Slack one — let me do it, because I think it's an interesting one. Ctrl-C. This one is about events: I'm going to create a function that's triggered by a Kafka event. What's my function? It's just a Python function that sends a message to Slack. You can see it's literally twenty lines: the beginning is just imports, I'm getting my Slack token from a Kubernetes secret that I pre-created, and the actual function is about four lines — it's the Slack client API call: it gets the message and just sends it to Slack.

I'm going to create that function from the CLI: function deploy — see, I'm calling it slack-event — but for once this function is not triggered by HTTP, it's triggered by an event in Kafka. Sadly this is not yet using the new architecture I mentioned — it's still the old architecture — but the UX will be the same. So here you say that the function is triggered by a topic; you again specify the function from a file, a handler, a bit of dependencies as a file, and that it's a Python function. We check that it created the object: function ls — yes, we have a function. Look at the type column: at the bottom it says pub/sub, so it's a new type of trigger. It has dependencies on the Slack client and the Kafka Python client, and it says it's ready, so the pods should be ready. If I look at my pods I see a slack-event pod, and it was created quite quickly — probably not as good as Lambda, but quite quick, and we're going to work on improving performance for cold and warm starts by playing with the Kubernetes scheduler and so on.

So now, how do I send an event to that function? Kubeless has a little convenience wrapper: there's a topic subcommand, so I can do topic publish — let me look at the help — so, topic publish: we want to publish a message on the slack topic and send some data saying "what's up CNCF, Ihor rocks", maybe in double quotes. And there you go — did you see it at the top? It hit my Slack: I have one new message on Slack, "what's up CNCF, Ihor rocks". So it worked. Let's do one more — hold on, not that one, where is my terminal, there you go, let me make this one a little smaller — and let's publish a new message: "thank you very much, Kubeless rocks", and we should see it on Slack. There you go: "thank you very much, Kubeless rocks".
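The function he walks through isn't shown in the transcript, but based on his description — roughly twenty lines, a token coming from a Kubernetes secret, and a few-line handler using the Slack client — it might look something like this sketch. The handler signature, channel, and environment variable name are assumptions:

```python
import os

from slackclient import SlackClient  # the pre-2.x Slack Python client

# The token would come from a Kubernetes secret exposed as an env var (name assumed).
slack_token = os.environ["SLACK_TOKEN"]
sc = SlackClient(slack_token)

def handler(event, context):
    # 'event' carries the Kafka message payload delivered by the Kubeless runtime;
    # its exact shape depends on the runtime version (CloudEvents support was just merged).
    message = event["data"] if isinstance(event, dict) else str(event)
    sc.api_call("chat.postMessage", channel="#general", text=message)
    return "message sent"
```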
So that's it for me. Thank you for your time, I hope that was interesting. What to keep in mind is that serverless applications are made of function endpoints, triggers, and event sources, as I mentioned, and a pipeline is going to be made of local charts, some remote cloud services coming from the service broker, and glue code deployed as functions — hopefully with Kubeless. So thank you very much, Ihor.

Perfect, thank you Sebastien. We have four more minutes and one extra question. Amelia is asking: do the Kubeless containers get killed after the function execution?

Right now you actually have one container that keeps running continuously even if the function is not called, because scaling to zero is a hard problem. I'm actually talking with Joe Beda — we were exchanging some messages, and he was telling me he has some new ideas on the subject — but it's difficult to kill all the pods and then resurrect them when the function is actually called; you may need to play at the ingress layer, and it's a bit more challenging. So for now we always keep one container running, and I have to say I don't think it's really that big of a deal. Some people make it out to be a big deal, but then you look at some of the latest lessons learned from Lambda and you see people faking calls to their functions to make sure they stay up, so that they don't have to suffer the cost of a cold start. So even AWS users are faking calls so that their functions don't get killed; for Kubeless, keeping one pod running is not a huge cost right now, but we're definitely keeping an eye on the issue.

Perfect. We have a couple of minutes and time for one more question: is Kubeless part of CNCF yet? I'd like to rephrase this a bit, Sebastien: can you describe, in one or two minutes, the general roadmap of Kubeless and its collaboration with CNCF itself?

Yes. Kubeless very much started as a proof of concept after KubeCon 2016, so December 2016 — there were some very interesting talks by Brendan Burns and Kelsey. I came back from KubeCon and did a proof of concept of "hey, how could we actually build FaaS — functions as a service — on top of Kubernetes?", and it turned out that people got very excited about it, so we decided to keep working on it. Now we have strong support from SAP, we have production use cases with Blackrock and others, so we do have quite a bit of traction, and we've started some big refactors so that the project can be viable in the long run. On the roadmap: definitely what I presented — the new architecture, which is slightly different, with the main intent of supporting as many event sources as possible; serverless to me is all about events, so we need to be able to plug in any event source out there. This should be merged within the next few weeks and be part of the big release; we're trying to get to GA and a 1.0 release before KubeCon in Copenhagen, so we are on a two-month crunch to get to GA by Copenhagen. This will also include better runtimes for things like Go and Java, because that's been a big ask from the community — people want to deploy Go functions and simple Java functions, maybe even provided as jars. Those have been the two main workstreams: the decoupling of events, triggers, and runtimes, which is now achieved, and then support for Go and Java; those are the two big things to get to GA. After that, maybe at the same time as 1.0, we'll also do the performance improvements: since GKE now supports autoscaling on custom metrics, I've started to talk with Google about how we can improve the performance of Kubeless for cold starts and warm-up of functions as we go to scale. Hopefully we'll have some very good data to showcase at KubeCon.

Perfect, thank you. Unfortunately we ran out of time, so I'd like to thank Sebastien for this amazing webinar and for showing us all the benefits of having serverless applications running together with Kubernetes. I'd like to remind everybody that if you missed a part of this webinar and would like to watch it again, or share it with your colleagues and friends, all the webinars are recorded and available on the CNCF website, together with the upcoming webinars that are announced there as well. So thanks everyone for joining us
today, and we hope to see you next week. Okay, thank you very much.
