Backup Kubernetes Objects to AWS S3 using Heptio Velero (Ark)

In this post I'll show how to back up Kubernetes objects to AWS S3 and restore that backup to the same cluster or a different one. This lets us copy, move, and restore our objects.

First of all we need Velero on our system. To do that, download the Velero binary (or build it from source).

$ wget https://github.com/heptio/velero/releases/download/v1.0.0/velero-v1.0.0-linux-amd64.tar.gz && tar -xvf velero-v1.0.0-linux-amd64.tar.gz

Now that we have the binary, we need to install the Velero server components into our cluster.
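A rough sketch of the install step: Velero reads AWS credentials from a file and is pointed at an S3 bucket. The bucket name and region below are placeholders, not the ones used in this post.

```shell
# Hypothetical AWS credentials file for Velero (values are placeholders):
cat > credentials-velero <<'EOF'
[default]
aws_access_key_id = <YOUR_ACCESS_KEY>
aws_secret_access_key = <YOUR_SECRET_KEY>
EOF

# Install the server components into the current kubeconfig's cluster:
# velero install \
#   --provider aws \
#   --bucket my-velero-backups \
#   --secret-file ./credentials-velero \
#   --backup-location-config region=eu-central-1
```

The `velero install` command is commented out since it needs a live cluster; run it once the credentials file is in place.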

Note: we don't need any cluster inside AWS; the AWS credentials are only for storing backup objects in S3. For this blog post I created a DigitalOcean Kubernetes cluster.

Velero is running, yay! Let's back up some objects. Every Velero tutorial uses nginx as the example, so hey, let's run nginx.
Let's check our backup location.

The Velero GitHub repo has an example nginx YAML. Let's apply it. This YAML creates a namespace, a deployment, and a LoadBalancer service for nginx. You may have problems with nginx:1.7.9, so let's change the deployment's image to latest.

Now we can create a backup using the velero binary.
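The backup step can be sketched as a small script. The backup name and label selector here are assumptions based on the nginx example, not taken from the post's screenshots.

```shell
# Write the backup commands to a script; run it against a cluster with Velero installed.
cat > backup.sh <<'EOF'
#!/bin/sh
velero backup create nginx-backup --selector app=nginx
velero backup describe nginx-backup
velero backup logs nginx-backup
EOF
chmod +x backup.sh
```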


As we can see, our backup completed. Let's check the logs.

Nice. Let's look at our S3 bucket to see what's inside.

These files contain our deployment, service, and namespace as JSON, along with the backup logs.

$ kubectl delete namespace nginx-example
namespace "nginx-example" deleted

We already have a backup, so let's restore it. We can also restore it into a different namespace.
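A restore sketch, again with the backup name and target namespace as assumptions. The `--namespace-mappings` flag is what remaps the restored objects into a different namespace.

```shell
# Write the restore commands to a script; run it against a cluster with Velero installed.
cat > restore.sh <<'EOF'
#!/bin/sh
# Restore into the original namespace:
velero restore create --from-backup nginx-backup
# Or restore into a different namespace instead:
# velero restore create --from-backup nginx-backup \
#   --namespace-mappings nginx-example:nginx-restored
velero restore get
EOF
chmod +x restore.sh
```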

As you can see, the restore completed.
If we were using persistent volumes, the volume snapshots would be restored as well. Have you noticed the deployment's pod names are the same as in the backup?



Posted in Kubernetes

CockroachDB on Kubernetes

In this post I'll show how to deploy and configure CockroachDB on your cluster, then create an example application to demonstrate how well CockroachDB handles node/pod failures.

First of all we need to install CockroachDB. The official documentation shows installation via Helm, but hey, we want to see those YAMLs, right? So we'll use the StatefulSet YAML that CockroachDB provides in its own repo. You can configure your database in secure or insecure mode; secure mode creates certificates and uses TLS for connections between nodes. In this post we'll follow secure mode (because TLS, that's why).

This creates 3 replicas of CockroachDB. But you'll see the pods stuck in Init status; that's because they're waiting on certificates.

$ kubectl get po
NAME READY STATUS RESTARTS AGE
cockroachdb-0 0/1 Init:0/1 0 4m3s
cockroachdb-1 0/1 Init:0/1 0 4m2s
cockroachdb-2 0/1 Init:0/1 0 4m2s

Let's look at one pod's logs. We only see the init-certs container's logs because it's waiting for its certificate signing request to be approved.

$ kubectl logs cockroachdb-0 -c init-certs

Request sent, waiting for approval. To approve, run 'kubectl certificate approve default.node.cockroachdb-0'
2019-05-29 09:06:55.291231733 +0000 UTC m=+30.302293510: waiting for 'kubectl certificate approve default.node.cockroachdb-0'

Let's approve this certificate, and the other pods' certificates as well. We can automate this, of course.

$ kubectl certificate approve default.node.cockroachdb-0

$ kubectl certificate approve default.node.cockroachdb-1

$ kubectl certificate approve default.node.cockroachdb-2
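The three approvals above can be scripted. This loop only prints the commands (the CSR names assume the default secure manifests); pipe it to `sh` to actually approve.

```shell
# Generate the approval command for each node's CSR; append "| sh" to execute.
for i in 0 1 2; do
  echo "kubectl certificate approve default.node.cockroachdb-${i}"
done
```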

After this, our pods are running but the readiness probe fails with HTTP 503.
No worries, our installation just isn't finished yet.

We need to create a job to initialize the cluster, and that creates another certificate request for default.client.root.

$ kubectl create -f cluster-init-secure.yaml
job.batch/cluster-init-secure created
$ kubectl certificate approve default.client.root
certificatesigningrequest.certificates.k8s.io/default.client.root approved

After this step, the pods should be running and CockroachDB should be working without problems.

As the documentation says, we need to create a client pod to access the database. Basically we just run ./cockroach sql --certs-dir=... and so on.

A user is required to access the UI, so let's create one.
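A hedged sketch of the user creation: the SQL goes to the secure client pod. The pod name, certs directory, username, and password below are all assumptions, not values from this post.

```shell
# SQL for a UI user (username/password are placeholders -- change them):
cat > create-user.sql <<'EOF'
CREATE USER IF NOT EXISTS roach WITH PASSWORD 'changeme';
EOF
# Run it from the secure client pod:
# kubectl exec -i cockroachdb-client-secure -- \
#   ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public < create-user.sql
```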

As you can see, we created our first user on CockroachDB. Let's look at the web UI with this user.

We could also create a Service, an Ingress, etc., but let's just pick one pod for the database UI.

$ kubectl port-forward cockroachdb-1 8080

and login to https://localhost:8080/

Web UI CockroachDB

Most importantly, we can see our cluster metrics nicely, along with the latest events.

Cockroach UI Metrics page

Let's create some load. You can access CockroachDB using a PostgreSQL driver, which is awesome for application development. For this purpose let's use the loadgen-kv example from the CockroachDB repo.

As you can see, we have created a simple load generator.

Now let's delete one pod and see how the cluster behaves. Since these pods use persistent volume claims, they recover easily, only needing to catch up on new data after init.

$ kubectl delete pod cockroachdb-0

And observe how the requests are handled.

As you can see, our load generator sends requests to our service, but each connection sticks to the same pod, much like a PostgreSQL driver connecting to a single instance (yes, you can load balance them). After we delete node 0, it starts sending requests to node 2, and for a short time latency is about 3x the normal level.

There is also a good per-SQL-query statistics page.

This page lets us see whether a query was distributed or failed, its latency, and more.

Future Work:

  • Build a distributed query/load generator and repeat this experiment.
  • How does the cluster handle bigger queries while one node is down? Does this affect node initialization time, and if so, by how much?
Posted in Uncategorized

Getting Notified from Radio Frequencies using RTL-SDR and Opsgenie

In this article I'll show how to get notified of radio signals using an RTL-SDR, a ham radio, and Opsgenie.

Setup and System

  • RTL-SDR USB stick
  • Ham Radio
  • Opsgenie account

There is a nice SDR scanner by madengr called ham2mon (https://github.com/madengr/ham2mon). I edited this Python script to create an alert when the captured frequency and squelch value match the desired ones.

Basically I followed the Opsgenie Python SDK tutorial:
https://docs.opsgenie.com/docs/opsgenie-python-api#section-create-aler
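The SDK call boils down to a single authenticated POST to the Opsgenie Alerts API. A rough curl equivalent, where the API key and the alert message are placeholders:

```shell
# Alert payload (message/priority are placeholders):
cat > payload.json <<'EOF'
{
  "message": "Signal detected on 433.860 MHz",
  "priority": "P3"
}
EOF
# Send it to the Opsgenie v2 Alerts API:
# curl -X POST 'https://api.opsgenie.com/v2/alerts' \
#   -H 'Content-Type: application/json' \
#   -H 'Authorization: GenieKey <YOUR_API_KEY>' \
#   -d @payload.json
```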

For the start/capture block I used this configuration:

./ham2mon.py -a "rtl" -n 1 -f 434360000 -g 20 -s -40 -v 0 -w


I chose to capture 434.360 MHz because it's legal to use with an A-class ham radio certificate (which I have 🙂), and the squelch is -40 dB so I won't capture anything too weak.

Cool Part

Let's try it out! I'll be transmitting on 433.600 MHz (but the hardware isn't rock solid, so it actually shows up at 433.860).

ham2mon capturing radio signals at 433.860 MHz

Cool, let's see the Opsgenie alert.

Cool subdomain, isn't it?


What's Next

These will be in part 2:

  • Use AWS Transcribe to get notified when someone calls you on a given frequency
  • Get alerts whenever your favorite song is on FM


Posted in Python, SaaS, Uncategorized

Run Tasks on EC2 Without SSH Using AWS Systems Manager

There are many ways to deploy your application or run commands inside an EC2 instance, but to reduce security threats, SSH should be blocked (IMHO). Even inside a private VPC it's always good to be careful. AWS has a tool for this purpose called Systems Manager. It can do a lot of things besides accessing an EC2 instance and running commands inside it, but this post focuses on Systems Manager Documents.

First of all we need to give our EC2 instances the necessary IAM policy, AmazonSSMFullAccess. After that we can create the instance using the CLI, console, or SDK, without opening the SSH port to the internet or the VPC. Because I used Amazon Linux 2 in this post, I didn't need to download and install the amazon-ssm-agent; this agent is what connects to the instance and executes our commands.

As shown, our security groups don't define an SSH rule.

After this stage we should define our SSM document. Systems Manager lets us load the document from GitHub or S3, or define it in the console. I created a GitHub repo and uploaded it there.
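A minimal hypothetical command document of the kind you would upload there: schemaVersion 2.2 with an aws:runShellScript step. The document name and the nginx commands are assumptions, not taken from the post's repo.

```shell
# Hypothetical SSM command document installing nginx on Amazon Linux 2:
cat > install-nginx.json <<'EOF'
{
  "schemaVersion": "2.2",
  "description": "Install and start nginx on Amazon Linux 2",
  "mainSteps": [
    {
      "action": "aws:runShellScript",
      "name": "installNginx",
      "inputs": {
        "runCommand": [
          "sudo amazon-linux-extras install -y nginx1",
          "sudo systemctl enable --now nginx"
        ]
      }
    }
  ]
}
EOF
# Register it as a Command document:
# aws ssm create-document --name installNginx \
#   --document-type Command --content file://install-nginx.json
```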

Then we select our instance by tags or manually. The last section is for SSM logs; I chose an S3 bucket for logs, which is very useful for debugging. You can also use SNS notifications to get notified on command success or failure.

Command status after finishing successfully

And the command works: nginx was installed without accessing the instance manually or using SSH.



Posted in AWS

Serverless Function: Recognize dogs and send to Whatsapp using Twilio and AWS Rekognition

While preparing my graduation project I thought I needed to add some kind of recognition for the demo. For that purpose I used AWS Rekognition.

But obviously that's not enough :).
The function is basically a simple Flask REST service; it expects an image as form data.
My Knative function receives an image of an animal and sends it to AWS Rekognition to determine whether it's a dog. If it is, I upload the image to AWS S3 with a unique ID (UUID). Once the upload finishes successfully, I call the Twilio WhatsApp API to send that dog image from S3 to my WhatsApp number.
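The flow can be sketched as CLI calls. Everything here is a hypothetical stand-in for the Flask code: the bucket names, the Twilio sandbox number, and the environment variables are all placeholders.

```shell
# Generate a unique S3 object key, like the function's UUID step:
KEY="dogs/$(python3 -c 'import uuid; print(uuid.uuid4())').jpg"
echo "$KEY"

# 1) Ask Rekognition whether the image contains a dog:
# aws rekognition detect-labels \
#   --image '{"S3Object":{"Bucket":"uploads","Name":"in.jpg"}}' --min-confidence 80
# 2) Upload the image under the unique key:
# aws s3 cp ./in.jpg "s3://my-dog-bucket/${KEY}"
# 3) Send the S3 image to WhatsApp via Twilio:
# curl "https://api.twilio.com/2010-04-01/Accounts/${TWILIO_SID}/Messages.json" \
#   -u "${TWILIO_SID}:${TWILIO_TOKEN}" \
#   --data-urlencode "From=whatsapp:+14155238886" \
#   --data-urlencode "To=whatsapp:${MY_NUMBER}" \
#   --data-urlencode "MediaUrl=https://my-dog-bucket.s3.amazonaws.com/${KEY}"
```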

Github repository:
https://github.com/ffahri/serverlessrecognizedogs

Scheme:

Images:
Here we see twilio setup.

Posted in AWS, Kubernetes, Python

Create and upload container images to AWS ECR with Kaniko inside Kubernetes

In this post I'll show how to build container images inside Kubernetes using Kaniko and push them to an ECR repository.

First of all we need to configure Kaniko with the ECR URL and AWS credentials so it can work with ECR via IAM. Kaniko's config doesn't come with the AWS ECR credential helper entry, so we should add our ECR FQDN and the helper tag to this file.

kanikoconfig.json
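A hedged example of that config.json: the Docker-style credHelpers entry mapping the ECR registry to the ecr-login helper. The account ID and region below are placeholders.

```shell
# Docker-style config telling Kaniko to authenticate to ECR via ecr-login:
cat > config.json <<'EOF'
{
  "credHelpers": {
    "123456789012.dkr.ecr.eu-central-1.amazonaws.com": "ecr-login"
  }
}
EOF
# kubectl create configmap docker-config --from-file=config.json
```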

Using kubectl, create a ConfigMap from this config.json.

It also needs AWS credentials, which we'll provide using kubectl create secret. I'm using IAM credentials with ECR-only access for this.

Kaniko.yaml

Here are the Kaniko logs:

Let's check the ECR repository.

Posted in AWS, Kubernetes

Creating Serverless Backend using AWS Lambda and DynamoDB

In this post I'm going to show how to create a Lambda function for creating and showing items in DynamoDB.

Our diagram basically looks like this.

First of all we need to create our DynamoDB table. I created it using the Java SDK, with code that can be found in the AWS documentation.
I changed the provisioned capacity to 1 because we're just experimenting and don't need much throughput.

Since DynamoDB is a NoSQL database, I also created a model for future items. The table is simple: a person's name, surname, and addresses.
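As a sketch, here is a hypothetical CLI equivalent of the Java SDK calls. The table name, key attribute, and the example item are assumptions based on the description above.

```shell
# Create the table with minimal provisioned capacity:
# aws dynamodb create-table \
#   --table-name Person \
#   --attribute-definitions AttributeName=id,AttributeType=S \
#   --key-schema AttributeName=id,KeyType=HASH \
#   --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1

# An example item in DynamoDB's attribute-value JSON format:
cat > item.json <<'EOF'
{
  "id": {"S": "1"},
  "name": {"S": "James"},
  "surname": {"S": "Holden"},
  "addresses": {"L": [{"S": "Rocinante"}]}
}
EOF
# aws dynamodb put-item --table-name Person --item file://item.json
```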

Add item function :

Let's invoke this function from the Lambda console.
Yes, I know you can't find James Holden on Earth; maybe we should write Rocinante :).

Show item function:

Let's deploy and test this show method.
I created a new prod stage on API Gateway and published my methods there; API Gateway gave me a URL to execute them.

Conclusion:

Creating a serverless backend with Lambda is fast and a lot of fun. You don't even have to think about servers or infrastructure security.

Posted in AWS

Using Traffic Shifting on Istio to make Blue/Green Deployments on Kubernetes

In this post I'll show how to do blue/green deployments on Kubernetes using Istio.

After installing Istio, we should create our namespace and enable sidecar injection. I'm labeling the namespace for automatic injection using:

Our test yaml:

Apply this YAML using:

Now we need to create an Istio Gateway for our service and define route rules.

These create a Gateway and a VirtualService for our app.
After that we can use the istio-ingressgateway to test the system.
Since we route all traffic to v1, we should always see the blue webischia.

Monitoring of these services (thanks, Istio, for taking care of it all 🙂).

Now let's say we created a new version and want to publish it, splitting traffic 80% to 20%.
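A hedged sketch of the weighted VirtualService behind such a split. The host, gateway, and subset names are assumptions, not the ones from this post's YAML.

```shell
# VirtualService splitting traffic 80/20 between two subsets:
cat > traffic-split.yaml <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: webischia
spec:
  hosts:
  - "*"
  gateways:
  - webischia-gateway
  http:
  - route:
    - destination:
        host: webischia
        subset: blue
      weight: 80
    - destination:
        host: webischia
        subset: green
      weight: 20
EOF
# kubectl apply -f traffic-split.yaml
```

Shifting further is just editing the two weight values and re-applying.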

As you can see, our graphs show the traffic shift being applied.

This happens without any server- or client-side errors.

Now let's flip the weights: 80 green, 20 blue.

And finally we can fully shift to green.

Conclusion:
Istio makes traffic shifting remarkably easy. Observability is also a big deal, and it's effortless with Istio.

Posted in Kubernetes

BareMetal LoadBalancer for Kubernetes using MetalLB

In this post I'll show how to install and use MetalLB as an L2 load balancer for Kubernetes.

If you're not on AWS, GKE, or AKS, a LoadBalancer service on Kubernetes stays stuck in the pending state. MetalLB is a tool that solves this, providing a load balancer for your bare-metal Kubernetes cluster with support for BGP (Border Gateway Protocol) and L2 mode.

First of all, I created a DigitalOcean droplet in Frankfurt with 2 GB of RAM and installed kubeadm.
The MetalLB installation YAML first creates a namespace for MetalLB, then creates and applies RBAC for that namespace.
After applying the YAML we should check that our pods and services are running without errors.

The speaker handles ARP for the load balancer IPs, and the controller allocates IP addresses to LoadBalancer services.

The last step is giving MetalLB a config. In this config we define which protocol and which IP addresses will be used for services.

I'm using kubeadm for this example, with only one node, and I want to publish on the node's IP address.
So I'll give a single-host CIDR: "/32".
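A hedged example of that config, using MetalLB's ConfigMap format from this era. The address below is a documentation placeholder; substitute your node's IP.

```shell
# Layer 2 config advertising a single node IP (/32 pool):
cat > metallb-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.10/32
EOF
# kubectl apply -f metallb-config.yaml
```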

After applying it, we can dig into the speaker's logs.

Our kubectl get svc output looks awesome.

Service panel :
kubernetes-dashboard-service

Trying out:

Posted in Kubernetes

Creating custom AWS EC2 images with Packer.io

In this post I'll show how to create custom images for AWS EC2 and launch them within seconds.

We need to create a JSON file to define our image settings.
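A minimal hypothetical template along these lines: an amazon-ebs builder plus a shell provisioner. The region, source AMI, and names are placeholders, not the values from this post.

```shell
# Hypothetical Packer template for a custom Amazon Linux 2 AMI:
cat > packer.json <<'EOF'
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "eu-central-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ec2-user",
      "ami_name": "custom-ami-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["sudo amazon-linux-extras install -y nginx1"]
    }
  ]
}
EOF
# packer validate packer.json && packer build packer.json
```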

After that we just need to build the image, and Packer does the rest for us.

Our output is:

Posted in AWS