All Topics

Hi, I have a query like the one below which returns a list of host names:

index=osmetrics flock=xxx source=ps PID=1 | lookup xxx.csv host | stats latest(ELAPSED) as last_reboot by host | eval reboot_days=if(like(last_reboot, "%-%"), mvindex(split(last_reboot, "-"),0), 0) | search reboot_days=0 | fields host | rename host as search

Result:

search
----------
host 1
host 2
host 3

I want to use the above query results as a subsearch, like this:

host IN [ index=osmetrics flock=xxx source=ps PID=1 | lookup xxx.csv host | stats latest(ELAPSED) as last_reboot by host | eval reboot_days=if(like(last_reboot, "%-%"), mvindex(split(last_reboot, "-"),0), 0) | search reboot_days=0 | fields host | rename host as search ]
| timechart count by abcd

which should expand to:

host IN ( "host 1","host 2","host 3" )
| timechart count by abcd

Please help me with a query to format the output of query 1 as ( "host 1","host 2","host 3" ) so it can be used as a subsearch in query 2.
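In case it helps later readers: one common pattern (a hedged sketch; the index, lookup, and field names are taken from the question above, and the outer base search `index=your_outer_index` is a placeholder) is to drop the `rename host as search` and let `format` expand the host list:

```spl
index=your_outer_index
    [ search index=osmetrics flock=xxx source=ps PID=1
      | lookup xxx.csv host
      | stats latest(ELAPSED) as last_reboot by host
      | eval reboot_days=if(like(last_reboot, "%-%"), mvindex(split(last_reboot, "-"), 0), 0)
      | search reboot_days=0
      | fields host
      | format ]
| timechart count by abcd
```

The subsearch then expands to ( ( host="host 1" ) OR ( host="host 2" ) OR ( host="host 3" ) ), which filters the outer search the same way host IN ( ... ) would.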
Hello Splunk community, I need to run one prediction over two different time ranges, with different spans, in a single report. The objective is to alert on the predicted rate of messages: 1) from 5 am to 10 pm (span=10min) and 2) from 10 pm to 5 am (span=20min). It may be really easy, but as I'm new to Splunk, I couldn't find a proper way to do it. My base query is:

|tstats latest(msg) as msg where `sws_logs_indexes` sourcetype=sws:sag:msgpartners host="p*" mp_name="Bessserver*" sag_instance="*SAG12" by _time sag_instance mp_name span=10m
| stats sum(msg) as msg by _time sag_instance
| streamstats current=false latest(msg) as previous_msg by sag_instance
| eval rate=msg-previous_msg
| timechart span=10m avg(rate) as "Server msg rate"
| predict "Server msg rate" as prediction algorithm=LLP5 holdback=0 future_timespan=0 period=1008 upper75=upper75 lower75=lower75
| `forecastviz(24, 0, "Server msg rate", 75)`
| eval isOutlier = if(prediction!="" AND 'Server msg rate' != "" AND ('Server msg rate' < 'lower75(prediction)' OR 'Server msg rate' > 'upper75(prediction)'), 1, 0)
| where isOutlier=1
| table _time, isOutlier
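A possible direction for handling the two windows (a sketch, not tested against this data; `base_rate_search` is a hypothetical macro standing in for the tstats/stats/streamstats part of the query above): filter each window on the hour of day and run the two spans as separate scheduled searches or panels:

```spl
`base_rate_search`
| eval hour=tonumber(strftime(_time, "%H"))
| where hour>=5 AND hour<22
| timechart span=10m avg(rate) as "Server msg rate"
| predict "Server msg rate" as prediction algorithm=LLP5 period=1008
```

The night-window search would instead use `where hour>=22 OR hour<5` with span=20m and a correspondingly adjusted period value.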
Greetings all! Hope you are doing well. I need your guidance/advice on the best way to implement zero trust security, specifically on our Splunk servers and users, to enforce strict access controls. Thank you in advance.
We have a 16GB indexing license for one application, and for the first time we have exceeded the limit. I would like to know if there is a way to tell Splunk to stop an indexing input if the license quota goes above 90%. Can we do it with a script, and is there any other solution for this? Thanks.
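For anyone hitting this: Splunk has no built-in switch that stops an input at a usage threshold, but a common workaround (a hedged sketch; the hard-coded 16 GB matches the license size mentioned above, and the 90% threshold is from the question) is an alert on the license-usage logs that notifies you or triggers a script that disables the input:

```spl
index=_internal source=*license_usage.log type=Usage earliest=@d
| stats sum(b) as bytes_used
| eval pct_used=round(bytes_used/(16*1024*1024*1024)*100, 2)
| where pct_used > 90
```

Scheduled over `earliest=@d`, this tracks the current day's quota; the alert action could then run a script that disables the relevant input stanza.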
I have a search query that gives the following (example) results:

Name | WW   | Name2 | Result | Type | Value
Abc  | 50.5 | Prod  | Pass   | A    | 1280
Xyz  | 47.2 | Prod  | Pass   | Dr   | Sound
Abc  | 51.3 | Test  | Fail   |      |
Def  | 8.2  | Test  | Fail   | Td   | Wifi
Def  | 44.2 | Prod2 | Pass   | Gf   | Printer
Xyz  | 6.2  | Test1 | Fail   | Fr   | Audio
Abc  | 451  | Prod1 | Pass   | Cs   | Audio

The values in the Name column are not fixed; there can be 10-12 different names. I want the above table shown side by side, one column group per name, like this:

Name | WW   | Name2 | Result | Type | Value || Name | WW   | Name2 | Result | Type | Value || Name | WW   | Name2 | Result | Type | Value
Abc  | 50.5 | Prod  | Pass   | A    | 1280  || Xyz  | 47.2 | Prod  | Pass   | Dr   | Sound || Def  | 8.2  | Test  | Fail   | Td   | Wifi
Abc  | 51.3 | Test  | Fail   |      |       || Xyz  | 6.2  | Test1 | Fail   | Fr   | Audio || Def  | 44.2 | Prod2 | Pass   | Gf   | Printer
Abc  | 451  | Prod1 | Pass   | Cs   | Audio ||      |      |       |        |      |       ||      |      |       |        |      |

I would like to know whether there is any way I can show it as a table like this, either through the query or through changes in the XML.
Will someone please confirm the exclusion/inclusion that occurs based on the statement below. The way I interpret it is:

* No events that occur on Monday or Thursday before 07:00
* No events that occur on Monday or Thursday after 09:00
* All events for other days of the week, regardless of time
* Exclude any events from the 1st day of the month, regardless of day of the week or time

|eval date_wday=strftime(epochtime,"%w")
| eval day_sat=strftime(_time,"%A")
| eval time=strftime(_time,"%H:%M")
| eval Day1ofWeek = strftime(relative_time(_time,"@w0"),"%m/%d")
| where NOT IN(day_sat ,"Monday", "Thursday") OR time < "07:00" OR time > "09:00" OR day_number !=1
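For what it's worth, a hedged observation and rewrite: `NOT IN(day_sat, ...)` is not valid SPL (the eval/where function form is `in(field, "v1", "v2")`), and `day_number` is never defined in the pipeline. Also note that once the syntax is fixed, the OR chain keeps Monday/Thursday events that fall outside 07:00-09:00 (i.e. it excludes the 07:00-09:00 window on those days), which appears to be the opposite of the first two bullets. A syntactically valid version of the clause as written might look like:

```spl
| eval day_sat=strftime(_time, "%A")
| eval time=strftime(_time, "%H:%M")
| eval day_of_month=tonumber(strftime(_time, "%d"))
| where (NOT in(day_sat, "Monday", "Thursday") OR time < "07:00" OR time > "09:00")
    AND day_of_month != 1
```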
How can you see the search.log of a db output? Good evening. I need to validate the information of a certain db output in the environment. No matter how much I search, I only find the execution time and results, but I would like to look at more detail, like what the job inspector shows, since there is currently an error in the environment that does not appear in _internal either:

Search Results might be incomplete. If this occurs frequently, please check on the peer.

This is the error message I'm trying to search for, but it only appears in ad-hoc searches and saved searches. Is it possible to search for these errors that appear inside a search.log file via a Splunk query? PS: I only have web access to Splunk, and I want to verify whether a development is having problems because of these communication errors in the environment.
I have log files that append new data every five minutes starting with a timestamp, then dashes (-) then header, then dashes, then many events, and concludes with  dashes.  Splunk is not seeing the first timestamp in the file, but when it hits the timestamp 10 rows in, and subsequent ones, it picks up the timestamp and applies it to all events until it hits another timestamp.  Using universal forwarder on version 8. 2020-12-15 22:03:36,877 INFO [Timer-Driven Process Thread-3] o.a.n.c.C.Processors Processor Statuses: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | Processor Name | Processor ID | Processor Type | Run Status | Flow Files In | Flow Files Out | Bytes Read | Bytes Written | Tasks | Proc Time | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | Put file in HDFS | 857c16a7-5bf0-31e5-bd30-fa66d414d6af | PutHDFS | Disabled | 0 / 0 bytes (+0/+0 bytes) | 0 / 0 bytes (+0/+0 bytes) | 0 bytes (+0 bytes) | 0 bytes (+0 bytes) | 0 (+0) | 00:00:00.000 (+00:00:00.000) | | Update filename to keytab name | 34a833ad-8002-3e9b-8418-41191fc1f460 | UpdateAttribute | Disabled | 0 / 0 bytes (+0/+0 bytes) | 0 / 0 bytes (+0/+0 bytes) | 0 bytes (+0 bytes) | 0 bytes (+0 bytes) | 0 (+0) | 00:00:00.000 (+00:00:00.000) | | Extract username and hdfslocat | 61173591-acac-3706-95b2-d7abf5da3396 | EvaluateJsonPath | Disabled | 0 / 0 bytes (+0/+0 bytes) | 0 / 0 bytes (+0/+0 bytes) | 0 bytes (+0 bytes) | 0 bytes (+0 bytes) | 0 (+0) | 00:00:00.000 
(+00:00:00.000) | | Fetch local keytab File | 06e85e30-73a7-30e3-a696-3a992f6f96ce | FetchFile | Disabled | 0 / 0 bytes (+0/+0 bytes) | 0 / 0 bytes (+0/+0 bytes) | 0 bytes (+0 bytes) | 0 bytes (+0 bytes) | 0 (+0) | 00:00:00.000 (+00:00:00.000) | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2020-12-15 22:08:37,154 INFO [Timer-Driven Process Thread-3] o.a.n.c.C.Processors Processor Statuses: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | Processor Name | Processor ID | Processor Type | Run Status | Flow Files In | Flow Files Out | Bytes Read | Bytes Written | Tasks | Proc Time | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | HandleHttpRequest | 63363733-77c0-16b6-b519-2a4a3d2e2530 | HandleHttpRequest | Running | 0 / 0 bytes (+0/+0 bytes) | 0 / 0 bytes (+0/+0 bytes) | 0 bytes (+0 bytes) | 0 bytes (+0 bytes) | 103357 (+10335 | 00:04:09.863 (+00:04:09.863) | | :64096 PUT, POST newForm, subm | faca87ea-0617-3cbc-b64c-d83f0420c5db | HandleHttpRequest | Running | 0 / 0 bytes (+0/+0 bytes) | 0 / 0 bytes (+0/+0 bytes) | 0 bytes (+0 bytes) | 0 bytes (+0 bytes) | 103274 (+10327 | 00:04:07.390 (+00:04:07.390) | | Run netstat command q40 | 
fe76dc61-46c0-3dec-8e38-edcb7c192197 | ExecuteProcess | Running | 0 / 0 bytes (+0/+0 bytes) | 5 / 10 bytes (+5/+10 bytes) | 0 bytes (+0 bytes) | 10 bytes (+10 bytes) | 5 (+5) | 00:00:00.802 (+00:00:00.802) | | Run netstat command q30 | ce56f2d4-c74e-3bd2-9474-1ed0b67adeeb | ExecuteProcess | Running | 0 / 0 bytes (+0/+0 bytes) | 5 / 10 bytes (+5/+10 bytes) | 0 bytes (+0 bytes) | 10 bytes (+10 bytes) | 5 (+5) | 00:00:00.801 (+00:00:00.801) | | Send to Ambari Metrics Collect | 9f953e4a-e292-3483-ae8c-cc826f60a46e | InvokeHTTP | Running | 10 / 2.33 KB (+10/+2.33 KB) | 0 / 0 bytes (+0/+0 bytes) | 0 bytes (+0 bytes) | 130 bytes (+130 bytes) | 10 (+10) | 00:00:00.374 (+00:00:00.374) | | ExtractText | 2d705770-c331-33ba-a0f2-b761284be861 | ExtractText | Running | 10 / 20 bytes (+10/+20 bytes) | 10 / 20 bytes (+10/+20 bytes) | 20 bytes (+20 bytes) | 0 bytes (+0 bytes) | 10 (+10) | 00:00:00.107 (+00:00:00.107) | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    
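Symptoms like this usually come down to event-breaking and timestamping config on the indexer or heavy forwarder (a universal forwarder does not parse events itself). A hedged props.conf sketch for data like the sample above, assuming the `YYYY-MM-DD HH:MM:SS,mmm` timestamp layout shown (the sourcetype name is a placeholder):

```ini
[your:nifi:sourcetype]
SHOULD_LINEMERGE = false
# break only where a new "YYYY-MM-DD HH:MM:SS,mmm" timestamp starts a line
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```

The intent is to keep each dashed status table attached to the timestamped line that precedes it, instead of letting the dash rows start new events that have no usable timestamp of their own.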
I was voluntold to install Splunk ASAP. A VM was created with Windows Server 2019 Datacenter, and I was "guided" by someone from another agency: I downloaded Splunk 8.1.1 and he walked me through the installation. One of our primary reasons for installing Splunk is to be able to monitor Active Directory. I did NOT use an AD account when installing Enterprise; apparently it just lets you install with a made-up ID. So the questions are: Can I monitor AD if I didn't install with an AD account? If not, is the only option to reinstall?
Greetings! We recently upgraded our UFs throughout the environment to 8.1.0, and since the upgrade, none of the Windows based forwarders appear to be doing AD GUID/SID-to-value lookups. We have verified that  evt_resolve_ad_obj = 1 is set in inputs.conf for the [WinEventLog://Security] stanza (verified with btool as well), and prior to the upgrade, the functionality was working fine. We tried installing the 8.1.1 version of the forwarder on one box as a test, but the problem persisted. Has anyone seen this or have any suggestions on what to check? This is a multi-site clustered environment running Splunk Enterprise 8.0.7. Thanks for your help!
I have been using the range picker for a long time to run a search against data ingested the previous day. I normally use the Date Range picker and select the range between yesterday's date 00:00 and yesterday's date 24:00. This has worked fine for me. I was told that I can just use the "Yesterday" preset (or add earliest=-d@d latest=@d to the query). I know it's obvious, but I had missed it. However, I get different results with the "Yesterday" preset compared to what I have been doing with the date picker, and it is not a minor difference. Can anyone think why this might be happening? Thank you!
Hello! What I'm trying to do is check whether any of the events meet a criterion, and if so, assign a particular field and value to all events. E.g. I want to check whether any event's RequiredValue field has a value of "Yes". If so, all events get a ConditionalValue of "Yes"; if not, they all get a value of "No".

ID | RequiredValue | ConditionalValue
1  | No            | Yes
2  | No            | Yes
3  | No            | Yes
4  | Yes           | Yes
5  | No            | Yes

Any help would be greatly appreciated. Thanks!
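A common pattern for this (a sketch; RequiredValue and ConditionalValue are the field names from the question, while the helper field any_yes is mine): compute the flag across all events with eventstats, then map it back onto every event:

```spl
| eventstats max(eval(if(RequiredValue="Yes", 1, 0))) as any_yes
| eval ConditionalValue=if(any_yes=1, "Yes", "No")
| fields - any_yes
```

Because eventstats writes its aggregate onto every event, each row sees whether any row had RequiredValue="Yes".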
I have data being fed to Splunk in real time that I would like to tie to project IDs and budgets in a lookup table, based on two criteria:

* time falls between start_time and end_time in the lookup table
* owner equals the owner in the lookup table

Here's the example data:

time                | owner | Spent | Notes
2020-10-26 10:06:00 | Bill  | $30   | Supplies
2020-10-26 12:16:41 | Bill  | $10   | Food
2020-10-27 06:30:51 | Jeff  | $10   | Food
2020-11-04 07:06:03 | Bill  | $15   | Fuel
2020-11-04 08:01:19 | Frank | $20   | Fuel
2020-11-05 08:10:00 | Bill  | $20   | Supplies
2020-11-05 08:12:21 | Jeff  | $10   | Fuel

Here's the example lookup table:

project_id | owner | budget | start_time          | end_time
1e         | Bill  | $200   | 2020-10-26 08:00:00 | 2020-11-04 12:00:00
2b         | Jeff  | $200   | 2020-10-21 08:00:00 | 2020-11-06 12:00:00
4a         | Frank | $100   | 2020-11-04 08:00:00 | 2020-11-22 17:00:00
2a         | Bill  | $200   | 2020-11-05 08:00:00 | 2020-11-10 12:00:00

This is the output I am looking for:

time                | project_id | budget | owner | Spent | Notes
2020-10-26 10:06:00 | 1e         | $200   | Bill  | $30   | Supplies
2020-10-26 12:16:41 | 1e         | $200   | Bill  | $10   | Food
2020-10-27 06:30:51 | 2b         | $200   | Jeff  | $10   | Food
2020-11-04 07:06:03 | 1e         | $200   | Bill  | $15   | Fuel
2020-11-04 08:01:19 | 4a         | $100   | Frank | $20   | Fuel
2020-11-05 08:10:00 | 2a         | $200   | Bill  | $20   | Supplies
2020-11-05 08:12:21 | 2b         | $200   | Jeff  | $10   | Fuel

I'm not really sure how to use the lookup command on a range, or if it's possible. Any suggestions/solutions are welcome. Thanks in advance!
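One possibility (a sketch; the file, stanza, and field names are assumed from the question): Splunk supports time-bounded ("temporal") lookups configured in transforms.conf, which match each event's _time against a time field in the CSV:

```ini
# transforms.conf
[projects]
filename = projects.csv
time_field = start_time
time_format = %Y-%m-%d %H:%M:%S
```

The search side would then be `| lookup projects owner OUTPUT project_id budget`. Note the caveat: a temporal lookup matches the most recent start_time at or before each event for the given owner and ignores end_time, so if projects can lapse with gaps you may still need a `where` check against end_time after the lookup.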
index=dart_index source=DMZ_IncomingOutgoing status_message="OK" earliest=-48h@h
| eval DeliveryComplete=strptime(delivery_complete, "%Y-%m-%d %H:%M:%S")
| stats values(src_host) as Source, values(dest_host) as Destination, values(login_name) as DataOwner, values(host_name) as DartNode, values(xfer_type) as XferMethod, min(DeliveryComplete) as EarliestFileXfer, max(DeliveryComplete) as LastFileXfer by subscription_name
| where now()>relative_time(LastFileXfer, "+24h@h")
| eval DaysOld=round((now() - round(LastFileXfer, 0))/86400, 2)
| eval EarliestFileXfer=strftime(EarliestFileXfer, "%Y-%m-%d %H:%M:%S")
| eval LastFileXfer=strftime(LastFileXfer, "%Y-%m-%d %H:%M:%S")
| table subscription_name Source Destination DataOwner DartNode XferMethod EarliestFileXfer LastFileXfer DaysOld

I have a search that was created by a previous developer; it searches the entire index, grouped by the field subscription_name. The problem is that we only want to monitor a certain subset of subscriptions, rather than the entire table of subscriptions in our DB.
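If the subscriptions to monitor can be kept in a lookup file, a sketch (the lookup name monitored_subscriptions.csv is hypothetical) that restricts the base search up front:

```spl
index=dart_index source=DMZ_IncomingOutgoing status_message="OK" earliest=-48h@h
    [ | inputlookup monitored_subscriptions.csv | fields subscription_name ]
| eval DeliveryComplete=strptime(delivery_complete, "%Y-%m-%d %H:%M:%S")
```

The rest of the existing pipeline stays unchanged; the subsearch expands to subscription_name="A" OR subscription_name="B" OR ... and filters the events before the stats command runs.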
Hello!! I have a question about how to do something. Within an index I have a field called entity, which corresponds to the companies whose products we manage. In total we have 130 different entities, each entity has 5 different users, and each entity should only see its own information in the reports; it cannot see the information of other entities. The information for all the entities is stored in a single index. Creating 130 indexes just to assign permissions seemed like a long task, and the provider in charge recommended using lookups for this. We have a lookup that maps the name of the user to the name of the entity, and with a token in the dashboards we filter the information. With this, everyone sees what they need to see. We add the user field that comes from the lookup to the data model.

The problem we have is that when we add a new user to the lookup, if the data model is accelerated, it never updates the information for this new user; if we do not accelerate the data model, the information updates immediately. We tried disabling the acceleration and re-enabling it, but it still didn't work; it kept returning the same information as before we changed the lookup. Another approach we tried was an automatic lookup on the index, but the same thing happens: if the data model is not accelerated, the information updates immediately, but if it is, it stays the same. However, if we build the data model again, creating it under another name but with the same root event and the same fields, it does bring the updated information.

What other suggestions would you have for doing something like this? Or can you see what I am doing wrong so that the process does not work the way I think it should? I attach three additional images, one of the lookup and two of the data model with and without acceleration, so that you can see the differences. Thanks a lot!!

[Images: Lookup / With acceleration / Without acceleration]
I built a query to fetch long-running jobs in a dashboard, as below. Here $Time$ is a token selected from a dropdown menu in that panel.

| rex field=_raw "ApplicationName:\s+\[(?P<Applname>.*)];"
| rex field=_raw "jobId: (?<jId>\w+);"
| stats earliest(_time) as start latest(_time) as end by jId,sourcetype
| eval diff=end-start
| eval LB=$Time$*60
| eval UB=$Time$+1*60
| stats count(eval((diff> LB) AND (diff<UP))) as count
| stats count

In the dashboard it shows some numbers (3 long-running jobs). But when I click on that number, it goes to the search tab with the query below and does not fetch any results.

| rex field=_raw "ApplicationName:\s+\[(?P<Applname>.*)];"
| rex field=_raw "jobId: (?<jId>\w+);"
| stats earliest(_time) as start latest(_time) as end by jId,sourcetype
| eval diff=end-start
| eval LB=5*60
| eval UB=5+1*60
| stats count(eval((diff> LB) AND (diff<UB))) as count

But when I change 'eval LB=5*60 | eval UB=5+1*60' to 'eval LB=300 | eval UB=360', it fetches the results. Here I am confused whether this is the right approach or not. Can anyone advise me on this?
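For later readers, the difference comes from operator precedence, not from the token: in eval, `*` binds tighter than `+`, so UB=5+1*60 evaluates to 5+60 = 65, while (5+1)*60 is 360. A sketch of the corrected evals (note the dashboard version also compares diff against UP where the field is actually named UB):

```spl
| eval LB=$Time$*60
| eval UB=($Time$+1)*60
| stats count(eval((diff > LB) AND (diff < UB))) as count
```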
Hello, I'm fairly new to Splunk and don't have any money for paid courses. I found a great book that seems to explain SPL clearly. I have downloaded and installed the trial version of Splunk and have already completed the course that comes with the trial. I want to go through the tutorial in the book "Exploring Splunk" by David Carasso, Splunk's Chief Mind, but I cannot find the sample data mentioned on page 17 at this link: http://splunk.com/goto/book#sample_data I've searched for it for several days and cannot find it. Can anyone in the Splunk community help me with this? I would really appreciate it, and I anticipate making great progress with it. Thank you
I need help creating a Splunk rule query to determine when the volatility rate changes from low to high, so that I can be alerted. Below is the query with the parameters/thresholds used to determine the "Volatility" metric:

index=security sourcetype="Computers" "Computer Status"=Enabled earliest=-12mon@mon
| timechart span=1day count
| timechart span=1month earliest(count) AS count
| stats avg(count) max(count) min(count)
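One hedged way to sketch such an alert (the window sizes and the 2x threshold are arbitrary assumptions, not tuned to this data): derive a volatility measure, such as the coefficient of variation over a rolling window, and fire when it jumps relative to its previous value:

```spl
index=security sourcetype="Computers" "Computer Status"=Enabled earliest=-12mon@mon
| timechart span=1mon count
| streamstats window=6 avg(count) as avg_count stdev(count) as sd_count
| eval volatility=round(sd_count/avg_count, 3)
| streamstats current=false window=1 last(volatility) as prev_volatility
| where volatility > 2 * prev_volatility
```

Saved as an alert, this triggers whenever the rolling volatility more than doubles from one period to the next.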
Hi fellow Splunkers, I have an issue with triggered alerts failing to send email with an authentication error (I use SMTP). I found out that an alert_actions.conf was mysteriously created under SPLUNK_HOME/etc/apps/[appname]/local/ with the stanza below:

[email]
auth_password = encrypted value

This value takes precedence over system/local/alert_actions.conf and is the main reason the emails are not getting sent. These issues only occur in my custom apps; the search app works as it should. This is a Splunk cluster with 3 search heads, the issue is seen on every search head, and local/alert_actions.conf is always automatically recreated even after I delete it. Note that I always push changes from the master node, so I can't explain why there is a file in my custom apps under the /local/ directories. Any input would be appreciated, thank you!
Dear Michael (@jkat54), we successfully use your Splunk Addon SSL Certificate Checker version 4.0.2 with the internal Splunk certificates. Thank you for sharing. Now we had the idea to also check some external certificates, meaning certs on the same server that are not Splunk certs. Unfortunately, I can't get this up and running. I tried to run the commands manually (see results below): ssl_checker3 worked, ssl_checker2 failed. I configured the location manually and through the UI. It seems a Python module is missing, but I cannot find it. I am running a fresh install of Splunk 8.1 on a test system.

splunk@ultra:~/etc/apps/ssl_checker/bin$ python3 ssl_checker3.py
cert="/opt/splunk/etc/auth/cacert.pem" b'expires="Jan 28 20:26:54 2027 GMT\n'
cert="/opt/splunk/etc/auth/appsCA.pem" b'expires="Jan 28 12:00:00 2028 GMT\n'
cert="/opt/splunk/etc/auth/appsLicenseCA.pem" b'expires="Mar  8 12:00:00 2023 GMT\n'
cert="/opt/splunk/etc/auth/server.pem" b'expires="Nov  5 12:20:38 2023 GMT\n'
cert="/opt/splunk/etc/auth/splunkweb/cert.pem" b'expires="Nov  5 12:20:40 2023 GMT\n'

So if Python is installed on the system, we can also use the app on a UF. That's fine!

splunk@ultra:~/etc/apps/ssl_checker/bin$ python3 ssl_checker2.py
Traceback (most recent call last):
  File "ssl_checker2.py", line 19, in <module>
    import splunk.mining.dcutils as dcu
ModuleNotFoundError: No module named 'splunk'

Okay, the Splunk Python modules are missing. When I run it with Splunk's internal Python, it shows me the following.
splunk@ultra:~/etc/apps/ssl_checker/bin$ /opt/splunk/bin/python3 ssl_checker2.py
'str' object has no attribute 'decode'

The config files look like this:

splunk@ultra:~/etc/apps/ssl_checker/bin$ cat ../local/ssl.conf
[SSLConfiguration]
disabled = 0
certPaths = /cribl/local/cribl/auth/server.pem

splunk@ultra:~/etc/apps/ssl_checker/bin$ cat ../local/inputs.conf
[script://./bin/ssl_checker2.py]
disabled = 0

[script://./bin/ssl_checker3.py]
disabled = 0

So the problem seems to be with the script "ssl_checker2.py" and the error "'str' object has no attribute 'decode'". Do you have an idea what could be going wrong and how we could track it down? Your help would be really appreciated. Kind regards, Thilo