Hi all,

I've set up Splunk to receive Fortigate logs according to the guide at https://www.fortinet.com/content/dam/fortinet/assets/alliances/Fortinet-Splunk-Deployment-Guide.pdf and built a custom dashboard to make sure I'm receiving events properly. For example, this widget is supposed to sum data usage per policy:

index="main" (eventtype=ftnt_fgt_traffic)
| eval sum(bytes) = bytes
| eval bytes = bytes/(1024*1024*1024)
| rename bytes AS RootObject.bytes policyid AS RootObject.policyid
| fields "_time" "host" "source" "sourcetype" "RootObject.bytes" "RootObject.policyid"
| stats dedup_splitvals=t sum(RootObject.bytes) AS "Sum of bytes" by RootObject.policyid
| sort limit=0 RootObject.policyid
| fields - _span
| rename RootObject.policyid AS "Policy ID"
| fields "Policy ID", "Sum of bytes"

What I've noticed is that the bandwidth shown in Splunk is higher than what the Fortigate policy reports. Can someone advise me on what I'm doing wrong and how to fix it? Let me know if you need any additional information.

Splunk Enterprise version: 8.1.2, build 545206cc9f70

Thank you!
- Shenath
I'm getting the following error while trying to create a new add-on in the Splunk Add-on Builder app. Could anyone please help me with this?

Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\TA-custom-data-builder\bin\ta_custom_data_builder\aob_py3\modinput_wrapper\base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "C:\Program Files\Splunk\etc\apps\TA-custom-data-builder\bin\syncplicity_events_apitest_1614653961_672.py", line 84, in collect_events
    input_module.collect_events(self, ew)
  File "C:\Program Files\Splunk\etc\apps\TA-custom-data-builder\bin\input_module_syncplicity_events_apitest_1614653961_672.py", line 39, in collect_events
    auth_token = get_auth_token(helper)
  File "C:\Program Files\Splunk\etc\apps\TA-custom-data-builder\bin\input_module_syncplicity_events_apitest_1614653961_672.py", line 83, in get_auth_token
    auth_headers = get_auth_headers(helper)
  File "C:\Program Files\Splunk\etc\apps\TA-custom-data-builder\bin\input_module_syncplicity_events_apitest_1614653961_672.py", line 96, in get_auth_headers
    base64Encoded = base64.b64encode(str.format('{}:{}', opt_client_id, opt_client_secret))
  File "C:\Program Files\Splunk\Python-3.7\lib\base64.py", line 58, in b64encode
    encoded = binascii.b2a_base64(s, newline=False)
TypeError: a bytes-like object is required, not 'str'
ERROR a bytes-like object is required, not 'str'
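The traceback points at the line that calls base64.b64encode() with a str: in Python 3, b64encode() only accepts bytes. A minimal sketch of the fix, with opt_client_id and opt_client_secret as placeholder values standing in for the add-on's inputs, is to encode the credential string before base64-encoding it:

```python
import base64

# Placeholder credentials; in the add-on these come from helper.get_arg().
opt_client_id = "my-client-id"
opt_client_secret = "my-client-secret"

# b64encode() requires bytes, not str, so encode the "id:secret" pair first;
# decode() turns the base64 bytes back into a str for use in an HTTP header.
credentials = "{}:{}".format(opt_client_id, opt_client_secret)
encoded = base64.b64encode(credentials.encode("utf-8")).decode("ascii")
print(encoded)
```

Replacing the failing line with `base64.b64encode('{}:{}'.format(opt_client_id, opt_client_secret).encode('utf-8'))` should resolve the TypeError.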
Security reported that the MongoDB bundled with Splunk is vulnerable and needs to be updated from version 3.6.17 to 3.6.20. I already upgraded Splunk Enterprise to the latest version, but Security said this did not upgrade the MongoDB version. How do I upgrade MongoDB?
I downloaded and installed the Teams Add-on for Splunk and it worked for a while, until we encountered a lot of 404 errors like the one below:

ERROR pid=14248 tid=MainThread file=base_modinput.py:log_error:309 | Error getting callRecord data: 404 Client Error: Not Found for url: https://graph.microsoft.com/v1.0/communications/callRecords/<call ID>?$expand=sessions($expand=segments)

I found out that the call ID had been removed from the Teams CDR for some reason, so when Splunk tried to download the CDR it got a 404, which is understandable. However, the Teams Add-on does not remove the call ID from the webhook directory in this scenario. The call ID remains there forever, and Splunk keeps retrying the CDR download and failing. This results in a huge number of call IDs that never get cleaned up and a massive number of error messages in the log.

Furthermore, I found that if too many call ID files exist in the webhook directory (~60K), the add-on hits a "401 Unauthorized" error when downloading the CDR and stops working soon afterward. After restarting Splunk, the add-on works again until the moment it hits the 401 error again.

I wrote a script to manage the load of the webhook folder, so this is OK for now, but it would be preferable for the add-on to have a load-management feature of its own. Hopefully the author will add this error handling soon, but in the meantime, if anyone knows how to work around the 404 issue, please share. Thanks a lot!
I tried to install Infoblox Intelligence (https://splunkbase.splunk.com/app/4472/) on Splunk 8.1.2, but the message below is displayed and the app does not work. I know the supported Splunk versions are 7.3 and 7.2. Does anyone know of a solution?

-----------------
Unable to initialize modular input "infoblox_threat_intelligence_urls" defined in the app "TA-infoblox-intelligence": Introspecting scheme=infoblox_threat_intelligence_urls: script running failed (exited with code 1)..
-----------------
Example 1:

time="2021-02-26T04:20:27Z" level=error msg="[xx] failed processing case" caseNumber=1234 error="Received bad status code for case processing. StatusCode: [500]. Error: [{\"response\": \"strconv.ParseInt: parsing \"\": invalid syntax\", \"status\": false}]." role=x spEnv=x spZone=x userID=x

I want to extract the string "Received bad status code for case processing. StatusCode: [500]. Error: [{\"response\": \"strconv.ParseInt: parsing \"\": invalid syntax\", \"status\": false}]."

Example 2:

time="2021-03-01T23:50:02Z" level=error msg="[xx] failed processing case" caseNumber=13423 error="Received unexpected status code 400 for POST request. Path: /v1/user Request: {\"cloudZone\":[{\"cloud\":\"x\",\"zone\":\"x\"}],\"userDetails\":{\"userName\":\"x\",\"fullName\":\"x\",\"employee\":{\"employeeId\":\"x\",\"emailId\":\"x\",\"lockout\":false,\"enabled\":true,\"kkk\":false},\"description\":\"24784\",\"idNumber\":123,\"groupId\":123}} Response: {\"code\":\"\",\"reason\":\"[DS] Number 123 is already used\",\"request-id\":\"884F74A7-E249-1649-57B7-2C12E807DEDA\"}\n" role=x spEnv=x spZone=x userID=x

I want to extract the string "Received unexpected status code 400 for POST request. Path: /v1/user Request: {\"cz\":[{\"c\":\"x\",\"z\":\"x\"}],\"userDetails\":{\"userName\":\"x\",\"fullName\":\"x\",\"employee\":{\"employeeId\":\"x\",\"emailId\":\"x\",\"lockout\":false,\"enabled\":true,\"kkk\":false},\"description\":\"24784\",\"idNumber\":123,\"groupId\":123}} Response: {\"code\":\"\",\"reason\":\"[DS] Number 123 is already used\",\"request-id\":\"884F74A7-E249-1649-57B7-2C12E807DEDA\"}\n"
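In both examples the target string is the value of the error= field, which runs from error=" to the next unescaped double quote; the embedded \" sequences must be stepped over rather than treated as terminators. A minimal Python sketch of that pattern, against a shortened sample event:

```python
import re

# Shortened sample event in the same key="value" log format; the error value
# contains backslash-escaped quotes, just like the real events.
event = (
    'time="2021-02-26T04:20:27Z" level=error msg="[xx] failed processing case" '
    'caseNumber=1234 error="Received bad status code for case processing. '
    'StatusCode: [500]. Error: [{\\"response\\": \\"strconv.ParseInt: parsing '
    '\\"\\": invalid syntax\\", \\"status\\": false}]." role=x spEnv=x'
)

# (?:[^"\\]|\\.)* matches any run of characters that are neither a quote nor
# a backslash, OR a backslash followed by anything -- so \" does not end the match.
match = re.search(r'error="((?:[^"\\]|\\.)*)"', event)
if match:
    print(match.group(1))
```

The same regular expression should be usable as the basis of an SPL `rex` extraction (with the quoting adjusted for SPL's own string escaping).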
We are setting up our first dedicated search head and need some information on the knowledge bundle replication process.

1) What are the units of maxBundleSize? The default is 2048 - is this KB, MB, etc.?
2) Does the replication process copy all the files from ../apps, ../users, and ../system to the search peer? So if you update an app on the search head, will it automatically be updated on the search peer?
3) What log does this process write to?
4) Is there a way to run this process manually, e.g. from the command line, or will simply restarting the splunk service kick it off?

Thanks
Hi Splunkers,

I'm working on a report of hosts that have not reported in a week. I used metadata to get each host's last reported time. Example:

time          host
1/3/2021      a1
1/3/2021      b1
28/2/2021     c1
27/2/2021     d1
24/2/2021     e1
22/2/2021     f1
22/2/2021     g1

How can I work with the time field to report the hosts last seen on 22/2/2021 (f1 and g1)? Your answer would be helpful. Thanks in advance.
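The selection logic here is "keep only the hosts whose last report falls on the earliest date in the table". A minimal Python sketch of that logic, with `last_seen` as hypothetical data mirroring the example table (dates are day/month/year):

```python
from datetime import datetime

# Hypothetical host -> last-reported-date pairs from the metadata output.
last_seen = {
    "a1": "1/3/2021", "b1": "1/3/2021", "c1": "28/2/2021",
    "d1": "27/2/2021", "e1": "24/2/2021", "f1": "22/2/2021", "g1": "22/2/2021",
}

# Parse day/month/year dates, find the oldest, keep only hosts on that date.
parsed = {host: datetime.strptime(d, "%d/%m/%Y") for host, d in last_seen.items()}
oldest = min(parsed.values())
stale = sorted(host for host, t in parsed.items() if t == oldest)
print(stale)  # -> ['f1', 'g1']
```

In SPL terms the equivalent move is computing the minimum of the last-seen time across all hosts (e.g. with `eventstats min(...)`) and filtering to hosts whose time equals it, or simply filtering on an age threshold such as "older than 7 days".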
Database connection via DB Connect in rising-column mode: logs stopped arriving for a range of about 2 hours. If the input is configured with a rising column, my understanding is that only new records are fetched, so if there is no data in a time range, is that because the database simply does not have that data? That is what I plan to tell the client, that they should check directly in the database whether data exists for that time range, or should I check something else first?
I'm a new Splunk user and I'm trying to figure out how to get a distinct count for the search query below. Any suggestions?

index=us_gssas_dev_app sourcetype="sql:database" Bot_Name="LoyaltyWaiver" NOT Input_File_Name="" OR Item_Status="Success" OR Item_Status="Exception" OR NOT Item_Status=* OR Item_Status=""
| stats count(eval(match(Input_File_Name,""))) as "Capacity"
        count(eval(match(Item_Status,"Success$$$$$$$$"))) as "Success"
        count(eval(match(Item_Status,"Exception$$$$$$$$"))) as "Exception"
        count(eval(match(Item_Status," "))) as "WIPF"
        count(eval(match(Item_Status, ""))) as "Processed" by Input_File_Name
| eval Final_Processed=(Processed - WIPF)
| eval WIP=(Capacity - Success - Exception)
| eval Success_Percentage=(Success/Capacity)*100
| eval Success_Percentage=round(Success_Percentage, 0)
| eval Success_Percentage=Success_Percentage + " %"
| table "Input_File_Name" "Capacity" "Success" "Exception" "WIP" "Final_Processed" "Success_Percentage"
Hi Everyone,

I have a requirement to set an alert that fires when node response time is greater than 5000 ms, and specifically when 2-3 nodes have a response time greater than 5000 ms. I'm not sure how to count multiple nodes whose response time exceeds the threshold. Can someone guide me? Below is my query:

index=abc ns=blazegateway (app_name=way-a) OR (app_name=way-b) (nodeUrl="*") Trace_Id=* "*"
| stats count by Trace_Id Span_Id ns app_name Log_Time caller nodeUrl nodeHttpStatus nodeResponseTime
| rename caller as "Caller", nodeUrl as "Node", nodeHttpStatus as "NodeHttpStatus", nodeResponseTime as "NodeResponseTime"
| fields - count
| sort 10 -NodeResponseTime
| where NodeResponseTime > 5000

Thanks in advance
Hello All, I'm developing a Splunk app and was asked to use git for source control. I'm curious to know if anyone uses git for source control on a Splunk app and how they implemented it.  Any tips would be useful. Thank you, Marco
A search head is showing the following error message:

Health Check: msg="A script exited abnormally with exit status: 3" input="./opt/splunk/etc/apps/SA-Utils/bin/data_migrator.py" stanza="default"

I'm not sure which direction to take. We just upgraded to Splunk version 8.0.8, remaining on ES app version 6.0.1, and the error began with that upgrade.
Hi, I have 4 different correlation IDs in the same row, and I want to split them so that each one lands in its own row:

1aaaaa-2bbbb-3cccc-4dddd 5aaaa-6bbb-7ccc-8ddd 1eeee-2ffb-3cggc-4dhhd 5aiia-6jjb-7kkc-8lld
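Since the IDs are space-separated within one value, the split itself is a simple whitespace split; a minimal Python sketch using the sample values from the question:

```python
# One row holding four space-separated correlation IDs (sample values from
# the question).
row = ("1aaaaa-2bbbb-3cccc-4dddd 5aaaa-6bbb-7ccc-8ddd "
       "1eeee-2ffb-3cggc-4dhhd 5aiia-6jjb-7kkc-8lld")

ids = row.split()   # one correlation ID per entry
for cid in ids:
    print(cid)      # each ID on its own row
```

The SPL analogue of this is turning the field into a multivalue field and expanding it, e.g. `| makemv delim=" " corr_id | mvexpand corr_id`, which produces one result row per ID.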
I have a search that looks at AD data to determine whether a user was disabled more than 6 months ago. My intention was for the search to only produce results if the user was disabled more than 6 months ago AND the user is still present in AD; the results would then remind us to remove those users if they haven't already been removed.

The problem is that the results include users that were already deleted from AD. I would ideally like to compare the results to the latest import of data from AD (which happens once a day). If a user is not present in the latest import, we can assume the user was already removed, and that user should not appear in the results.

Thanks for your help. Feel free to suggest any optimizations to my search.

index="msad" sourcetype="msad:users" Enabled!=""
| streamstats current=f last(Enabled) as new_Enabled last(_time) as time_of_change by SamAccountName
| where Enabled!=new_Enabled
| convert ctime(time_of_change) as time_of_change
| rename Enabled as old_Enabled
| dedup SamAccountName
| where new_Enabled="False" AND strptime(time_of_change, "%m/%d/%Y %H:%M:%S")<relative_time(now(),"-180d@d")
| table time_of_change, SamAccountName, old_Enabled, new_Enabled, DistinguishedName, AccountExpirationDate
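The comparison being asked for is a set intersection: report only the accounts that appear both in the "disabled more than 6 months ago" results and in the latest AD import. A minimal Python sketch with hypothetical account names:

```python
# Hypothetical account names: users flagged as disabled >6 months ago,
# and users present in today's AD import.
disabled_over_6mo = {"alice", "bob", "carol"}
latest_import = {"alice", "carol", "dave"}

# Keep only accounts still present in AD; anyone absent from the latest
# import was presumably deleted already and should drop out of the report.
still_present = sorted(disabled_over_6mo & latest_import)
print(still_present)  # -> ['alice', 'carol']
```

In SPL, one common way to express this intersection is to restrict the main results to accounts returned by a subsearch over the most recent import, e.g. appending something like `[ search index="msad" sourcetype="msad:users" earliest=-1d | dedup SamAccountName | fields SamAccountName ]` to the base search (sketch only; the exact time window depends on your import schedule).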
Hello,

I have upgraded to MS AD Objects v403 and have been through the baseline wizard successfully to create the new lookups. But no matter what I do, the information is not up to date, even just after finishing the build for the KV store lookups.

This is the dashboard I am using to test: AD Objects - Group - Group Member - Lookup. Admon data and baseline data look fine ("OK Admon Baseline Sync Data Available"), and the AD_Obj_Domain lookup has data.

Is there any way to troubleshoot this? Any help appreciated!
Hey all,

I have some questions about health.conf and webhooks. Recently I've been toying around with health.conf and testing some alerting, and I noticed my conf file has alert_action.webhook, but I can't find anything about it in the documentation. What I would like to do is configure it to send an alert to a Teams channel. If anyone has any information or has done something similar, I'd like to hear about it.
Good morning all. I am using Splunk Security Essentials 3.3.0, which reports to be compatible with 8.1. When we upgraded to Splunk 8.1.2, the "sseidenrichment" command stopped working. So far this affects the default search "Products and the Content Mapped to Them" as well as general use. Are others experiencing this issue as well?
I'm looking to write a search that captures a snapshot of how many devices from certain subnets went through our firewall in the last 20 minutes, but for each preceding hour of the day. The basic search is straightforward, with the time range set to the last 20 minutes:

index="firewall_std" src="10.19*.*.*"
| dedup src

But, for example, if I run this search at 2pm with the time range set to today, I would also like to bring back what the count was for the 20 minutes preceding 1pm, 12pm, 11am, 10am, 9am, and so on. I've achieved this by appending another search for each hour (see below), but I was wondering whether there is an easier, more streamlined way to do it.

| append [search index="firewall_std" src="10.19*.*.*" earliest=-1h@m-20m latest=-1h@m
    | dedup src
    | stats min(_time) as _time count as Count
    | eval hour="02"
    | fields hour, _time, Count ]
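Each appended subsearch differs only in the hour offset in its earliest/latest strings, so the window boundaries can be generated mechanically rather than written by hand. A small Python sketch (hourly_windows is a hypothetical helper name) that produces the same relative-time pairs the appended searches use:

```python
# Build (earliest, latest) relative-time string pairs for the 20-minute
# window ending exactly h hours ago, matching the appended searches in
# the question: earliest=-1h@m-20m latest=-1h@m, then -2h, -3h, ...
def hourly_windows(n_hours):
    windows = []
    for h in range(1, n_hours + 1):
        earliest = "-{}h@m-20m".format(h)
        latest = "-{}h@m".format(h)
        windows.append((earliest, latest))
    return windows

for earliest, latest in hourly_windows(3):
    print(earliest, latest)
```

An alternative worth exploring inside Splunk itself is a single search over the whole day that bins events into hours and keeps only events from the relevant 20-minute slice of each hour (e.g. filtering on date_minute), which avoids the repeated append subsearches entirely; the details depend on exactly which 20 minutes you want per hour.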
Hi Everyone,

I have created a trend-line panel. Below is the query:

index="abc*" OR index="xyz*"
| eval raw_len=len(_raw)
| eval MB=raw_len/pow(1024,2)
| timechart sum(MB) as total_MB by sourcetype

In my chart I can see some bubbles (point markers), but I don't want them; I want a line. I have attached the screenshot. Can someone guide me on that?