
Microsoft LDAP login using Python 3

Install the required package

python3 -m pip install ldap3
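
To confirm the package installed correctly, you can check that pip can see it:

python3 -m pip show ldap3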

Sample code to test a login

from ldap3 import Server, Connection, ALL
from ldap3.core.exceptions import LDAPException


def connect_ldap_server(server_uri, base_dn, username, password):

    try:
        # Provide the hostname (or IP) and optional port of the Microsoft LDAP / Active Directory server
        server = Server(server_uri, get_info=ALL)
        # Build the bind DN from the account's CN and the base DN it lives under
        connection = Connection(server,
                                user='CN=' + username + ',' + base_dn,
                                password=password)
        bind_response = connection.bind()  # Returns True or False
        connection.unbind()
        return bind_response
    except LDAPException:
        # Any bind or connection error means the login failed
        return False


if connect_ldap_server('ldap://9.1.0.3', 'OU=Headoffice,DC=example,DC=com', 'testuser', 'testpassword'):
    print('User logged in successfully')
else:
    print('User login was unsuccessful')
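
For Microsoft Active Directory specifically, the bind user does not have to be a full CN=...,OU=... DN: ldap3 also accepts a simple bind with the userPrincipalName (user@domain) form, or an NTLM bind with the DOMAIN\username form. A minimal sketch, assuming the same test server as above and a hypothetical example.com / EXAMPLE domain:

from ldap3 import Server, Connection, ALL, NTLM

server = Server('ldap://9.1.0.3', get_info=ALL)

# Simple bind using the userPrincipalName (user@domain) form
upn_conn = Connection(server, user='testuser@example.com', password='testpassword')
print(upn_conn.bind())   # True on success, False otherwise

# NTLM bind using the DOMAIN\username form (domain name is an assumption here)
ntlm_conn = Connection(server, user='EXAMPLE\\testuser', password='testpassword', authentication=NTLM)
print(ntlm_conn.bind())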
