Skip to main content

Monitor On Prem Resources From kube prom stack (Prometheus)

For this you would need Few Items

  1. Endpoints
  2. Service
  3. ServiceMonitor
---
apiVersion: v1
kind: Endpoints
metadata:
  name: onprem-proxy
  namespace: monitoring
subsets:
  - addresses:
    - ip: "192.168.10.10"
    - ip: "192.168.10.11"
    ports:
    - name: 'onprem-proxy-metrics'
      protocol: TCP
      port: 9100
---
apiVersion: v1
kind: Service
metadata:
  name: onprem-proxy
  namespace: monitoring
  labels:
    app.kubernetes.io/name: onprem-proxy
spec:
  ports:
    - name: "onprem-proxy-metrics"
      protocol: TCP
      port: 9100
      targetPort: 9100
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: onprem-proxy
  namespace: monitoring
spec:
  endpoints:
    - interval: 10s
      path: /metrics
      port: onprem-proxy-metrics
  namespaceSelector:
    matchNames:
    - monitoring
  selector:
    matchLabels:
      app.kubernetes.io/name: onprem-proxy

Comments

Popular posts from this blog

Add a Approval System in Jenkins For Build

Approval System in Jenkins For Build Use Cases: Only Specific users must be able to approve the build Speific users should be able to run the build without Approval Approval Can be turn off and On On-Demand Jenkins Variables Needs to Created under ( Manage Jenkins > Configure System > Environment variables ) ApprovalAdmins (Value: jenkins emails comma separated) skipApprovalUsers (Value: jenkins emails comma separated) BuildApproval (Value: True, False) import jenkins.model.Jenkins def getBuildUser() { return currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')['userId'] } pipeline { agent { label 'ec2-fleet-common' } stages { stage('Approval Process') { when { expression { env.BuildApproval == 'True' || env.BuildApproval == 'true' } } steps { script { ...

k8s rolling updates are not working

k8s rolling updates are not working Issue Whenever we were deploying a new release, pods were deleting to Fix no. like 2 then scaling up as per HPA. Cause Whenever we use replicas alongwith hpa and the deployment happens it first sets the pod count as per replicas, then hpa kick in and set the new values. To avoid this please remove or comment replicas in your yaml file. Relates Issues Old Pod is still running even after fresh deployment. Deployed Pod is still not created ( if only one pod was running 1). Relates Posts https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#migrating-deployments-and-statefulsets-to-horizontal-autoscaling

Create a read Only cli User for EKS

Use Case: When you want to provide access to users, you must always avoid prividing admin priviledges to users. This is needed for security and audit Purpose. Kubernetes allows you to create Rbac credentials using roles and cluster roles for service accounts, users, groups. From k8s: RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.   1. Lets First Create the cluster role and group Create file cluster-role-and-binding.yml --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: eks-readonly-group-binding subjects: - kind: Group name: eks-readonly-group apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: eks-readonly-group-cluster-role apiGroup: rbac.authorization.k8s.io --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: eks-readonly-group-cluster-role rules: - ap...