Prolance is a protocol surveillance tool that monitors the protocols running on a particular server, letting us keep track of all of their activity. So far, we have implemented it for two protocols: Dynamic Host Configuration Protocol (DHCP) and Active Directory (AD). DHCP is a network management protocol that dynamically assigns IP addresses to client devices, while Active Directory is a Microsoft product consisting of several services that run on a Windows server to manage permissions and access to network resources. To monitor a protocol's activities, we work on the activity logs it generates.
To process the logs, we have two scenarios: we can process the raw logs of the protocol, or we can filter the logs according to our monitoring needs. For example, in DHCP we need to monitor the device name and the IP address assigned to that device, while in Active Directory we need the device name, the Active Directory username, and the IP address assigned to that device. After processing, we produce the raw or filtered logs to a Kafka topic in compressed form, using the Gzip compression technique. Finally, we added a schedule to the program so that it produces the desired logs to the Kafka topic at every given interval of time.
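The DHCP filtering step can be sketched as follows. This is a minimal, std-only sketch that assumes the comma-separated layout of the Windows DHCP server audit log (ID, Date, Time, Description, IP Address, Host Name, MAC Address); the function name and the exact field positions are illustrative assumptions, not the production code.

```rust
// Sketch: extract the host name and assigned IP address from one line of a
// Windows DHCP server audit log. The field positions (ID, Date, Time,
// Description, IP Address, Host Name, MAC Address) are an assumption about
// the log layout.
fn filter_dhcp_line(line: &str) -> Option<(String, String)> {
    let fields: Vec<&str> = line.split(',').map(str::trim).collect();
    // In this layout, field 4 is the IP address and field 5 the host name.
    match (fields.get(4), fields.get(5)) {
        (Some(ip), Some(host)) if !ip.is_empty() && !host.is_empty() => {
            Some((host.to_string(), ip.to_string()))
        }
        _ => None, // malformed or header line: nothing to monitor
    }
}

fn main() {
    let line = "10,05/05/21,09:13:04,Assign,10.10.10.42,client-laptop,00AABBCCDDEE";
    if let Some((host, ip)) = filter_dhcp_line(line) {
        println!("{} -> {}", host, ip);
    }
}
```

Only the (device name, IP) pairs then need to be serialized and produced to the Kafka topic, which keeps the payload small even before Gzip compression.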
The major problems we faced were as follows:
To monitor activity on the running protocols, we need to work on the logs they generate. DHCP saves its activity logs in a plain log file, but Active Directory saves its activity logs in the server's security event log, which is stored in a file in .evtx format. Fetching the Active Directory logs from the .evtx file is a little tricky, so we used an evtx parser to parse the file.
The records returned by the evtx parser are key-value pairs, so we used a JSON parser to parse that data and then fetched the Active Directory fields from the JSON by their keys.
The Kafka client crate is not directly compatible with Windows Server, so to use Kafka in the project we had to disable its default features (default-features = false). We then wanted to produce the logs in compressed form, but with the default features disabled, compression stopped working. To solve that problem, we re-enabled the compression feature in the dependency declaration: [default-features = false, features = ["gzip"]].
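In a Cargo manifest, the feature selection described above would look roughly like this (the crate name and version shown are illustrative; the post only says "Kafka", and rdkafka is the common Rust client):

```toml
[dependencies]
# Disable the default features that do not build on Windows Server,
# then re-enable only gzip compression support.
rdkafka = { version = "0.25", default-features = false, features = ["gzip"] }
```

With gzip support compiled in, the producer can then be configured to compress its payloads (e.g. by setting the client's `compression.type` property to `gzip`).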
The benefits of working with Prolance:
Monitor the running activities on the network.
Monitor the behaviour patterns of devices through their activities.
Stream the logs in compressed form.
Schedule the program to fetch the activity logs at a specific interval.
Collect the filtered data from the Kafka topic in real time.
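The interval-based scheduling mentioned above can be sketched with a plain std loop; the `run_every` helper and the job body are illustrative stand-ins for wherever the filtered logs get produced to Kafka.

```rust
use std::thread;
use std::time::{Duration, Instant};

// Sketch: run a log-collection job at a fixed interval for a bounded number
// of iterations. A real scheduler would loop forever; the iteration count
// here just keeps the example finite.
fn run_every<F: FnMut()>(interval: Duration, iterations: usize, mut job: F) {
    for _ in 0..iterations {
        let started = Instant::now();
        job();
        // Sleep for whatever remains of the interval after the job ran.
        if let Some(rest) = interval.checked_sub(started.elapsed()) {
            thread::sleep(rest);
        }
    }
}

fn main() {
    let mut runs = 0;
    run_every(Duration::from_millis(10), 3, || {
        runs += 1; // collecting and producing the logs would happen here
    });
    println!("ran {} times", runs);
}
```

Subtracting the job's own runtime from the sleep keeps the ticks close to the configured interval even when log collection is slow.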
If you are looking to build this or a similar solution (IoT with Rust), please get in touch or send us an email at firstname.lastname@example.org. Knoldus has proven expertise in building reactive products with the Scala and functional Java ecosystems using the Lightbend platform, and in solving big data challenges leveraging the Apache Spark platform.