All Topics

Is it possible to create a custom search command, in a language other than Python, that takes in the search's results, does something with them, and then returns the new results to Splunk?
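As far as I understand it, yes: with the chunked (v2) search-command protocol, commands.conf can point at any executable, not only a Python script, as long as that executable itself implements the chunked protocol (read metadata and CSV payload chunks on stdin, write transformed chunks on stdout). A minimal, hedged sketch; the stanza and script names are hypothetical:

# commands.conf -- hypothetical stanza and script name
[mycommand]
filename = mycommand      # any executable in the app's bin/ directory
chunked = true            # protocol v2: Splunk runs the file directly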
I'm really overthinking this, but I am lost. I need to show when new correlation searches are introduced into the environment. I have a lookup with the current correlation searches, along with relevant data and the last time each was updated. How do I compare the current REST call results with the lookup and show the most recently created searches? Any help would be appreciated.
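A hedged sketch of one approach; the lookup and output field names are hypothetical, and action.correlationsearch.enabled assumes Enterprise Security's convention for marking correlation searches. List the searches over REST, then keep only the titles the lookup has never seen:

| rest /servicesNS/-/-/saved/searches
| search action.correlationsearch.enabled=1
| fields title updated
| lookup current_correlation_searches.csv title OUTPUT last_updated
| where isnull(last_updated)
| sort - updated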
I am working to upgrade Splunk from version 8.0.1 to 8.2.2.1 (Solaris 11.3 OS). After the upgrade I see the output below, which is fine.

pkg info -l splunkforwarder
Name: application/splunkforwarder
Summary: Splunk Universal Forwarder
Description: Splunk> The platform for machine data.
Category: Applications/Internet
State: Installed
Publisher: splunk
Version: 8.2.2.1
Build Release: 191762265523787
Branch: None
Packaging Date: Wed Sep 15 09:07:09 2021
Last Install Time: Fri Mar 25 17:43:36 2022
Size: 58.43 MB
FMRI: pkg://splunk/application/splunkforwarder@8.2.2.1,191762265523787:20210915T090709Z

When I try to start Splunk I see the error below:

./splunk start
ld.so.1: splunk: fatal: relocation error: file splunk: symbol in6addr_any: referenced symbol not found
Killed

Any suggestions on this specific error? Thanks. RB
Does anyone have suggestions on integrating an SNMP-enabled device into Splunk Enterprise? I'm very new to Splunk and have been asked to integrate an SNMP-enabled device into our Splunk Enterprise deployment. I think I need to somehow link a forwarder to the device and have the forwarder act as a receiver of the device's information. Once that data is in the forwarder, I think it should be processed by an associated indexer and then be available within Splunk. Is that correct, or do I misunderstand?
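That is roughly the right model. One common pattern, sketched with hypothetical path, sourcetype, and index names: the device itself can't run a forwarder, so a nearby host runs snmptrapd (or an SNMP poller) that writes to a log file, and a universal forwarder on that host monitors the file and sends the events on to the indexers.

# inputs.conf on the forwarder host -- hypothetical names
[monitor:///var/log/snmptrapd.log]
sourcetype = snmp:traps
index = network
disabled = false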
Hi all, I have a platform sending me events every 30 seconds. I batch the events based on a distinct variable "tomatoes" and send them to the relevant team every 10 minutes as an alert. I wrote the search below to show management the total number of raw events vs. the number of alerts being sent, based on historical data. I have now been asked to report on what the numbers would be if I throttled the alerts so that a distinct tomato would not create a new alert for 1 hour, and I have no idea how to do this. I don't need help with writing the alert; I need help with creating the report. The throttled alerts have not been created yet, so I need to figure out how to remove a distinct tomato from the results for 1 hour and then put it back in.

index=*
| bin _time span=10m
| eval time=strftime(_time, "%m/%d/%Y %H:%M")
| stats dc(tomatoes) as distinct_tomatoes count as total_tomatoes by time
| table time, distinct_tomatoes, total_tomatoes
| appendpipe [stats sum(distinct_tomatoes) as distinct_tomatoes sum(total_tomatoes) as total_tomatoes | eval time="Total"]
| appendpipe [where time!="Total" | stats avg(distinct_tomatoes) as distinct_tomatoes avg(total_tomatoes) as total_tomatoes | eval distinct_tomatoes=round(distinct_tomatoes,1), total_tomatoes=round(total_tomatoes,1) | eval time="Average"]

time              distinct_tomatoes  total_tomatoes
03/24/2022 19:00  1                  4
03/24/2022 19:10  1                  2
03/24/2022 19:20  2                  5
03/24/2022 19:30  1                  4
03/24/2022 19:40  1                  5
03/24/2022 19:50  3                  5
Total             9                  25
Average           1.5                4.2
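A hedged sketch for simulating the throttle on historical data, reusing the field names above: streamstats finds each tomato's previous event, and an event counts as a would-be alert only when there is no prior event or the gap is at least an hour. Note this is an approximation; a real throttle suppresses relative to the last fired alert, not the last event.

index=*
| sort 0 _time
| streamstats current=f last(_time) as prev_time by tomatoes
| eval would_alert=if(isnull(prev_time) OR _time - prev_time >= 3600, 1, 0)
| bin _time span=10m
| stats sum(would_alert) as throttled_alerts dc(tomatoes) as distinct_tomatoes count as total_tomatoes by _time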
I am looking to search one index for a specific field value and then use a second field from that index to search a second index for its value. For example: IndexA has fields named Project and IRNumber; IndexB has a field named InternalRequest. IRNumber in IndexA and InternalRequest in IndexB hold the same values. I would like to search IndexA by Project, then use the associated IRNumber values from IndexA to search IndexB for the matching InternalRequest values, and then table various fields from IndexB associated with those InternalRequest values. Is there some way to use a subsearch to do this?
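A hedged subsearch sketch using the names above; the Project value and the IndexB field list are hypothetical. The rename makes the subsearch hand its values back under the field name IndexB expects:

index=indexb
    [ search index=indexa Project="SomeProject"
      | fields IRNumber
      | rename IRNumber as InternalRequest ]
| table InternalRequest field1 field2

The usual subsearch limits on result count and runtime apply; for very large result sets, a lookup-based approach tends to scale better.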
I've got a lot of LDAP users who no longer exist (500+ folders in /etc/users). What's the proper way to clean this up? If I just delete the folders, won't the search head cluster simply resync them from the captain's running configuration? Also, someone pushed these using the deployer when we migrated to 8.x. If I remove all user folders from the /etc/shcluster/users folder on the deployer, will anything bad happen the next time I push? I have no desire to manage user settings with the deployer; I just want to push shcluster apps.
In a large environment with search heads in multiple locations, I would like to create a single dashboard that switches search query values based on the search head URL. This would allow us to have a single, location-agnostic dashboard for multiple locations and/or environments where the index values differ from one to the other. For example:

https://splunk.NorthAmerica.Mycompany.com
https://splunk.EU.Mycompany.com
etc.

token value = $URL$

where I could set the locale token as:

Locale = case($URL$ LIKE "%NorthAmerica%", "NorthAmerica", $URL$ LIKE "%EU%", "Europe")

Is there any built-in Splunk value that reads the requesting URL? If not, how do I construct JS that leverages the URLSearchParams() method to set the token?
Greetings, I am writing a REST API call ingestion that requires authentication by first obtaining an auth token before the actual input, so I'm making use of the custom Python code data-ingestion builder for the first time. I'm having issues with the initial setting of a checkpoint for the start date of the API call. I figure I need to check whether the checkpoint exists and, if it doesn't, set the checkpoint to the date provided in the Data Input settings. When I try this with the code snippet below, I just get an error from the first line:

AttributeError: 'NoneType' object has no attribute 'encode'
ERROR 'NoneType' object has no attribute 'encode'

I'm not sure whether there is a better place to set the initial checkpoint that I'm not finding (this is in the collect_events function definition, so probably), or whether there is a better way within Python to handle "if this doesn't exist, do this thing".

# On the first run there is no checkpoint yet, so fall back to the
# start date configured on the Data Input and persist it.
start_date = helper.get_check_point(input_name)
if start_date is None:
    start_date = helper.get_arg("data_ingestion_start_date")
    helper.save_check_point(input_name, start_date)
Can someone help with the Splunk placeholder? What is a placeholder? How do I create one? How does it work in a lookup? How do I make changes to an existing placeholder?
I'm looking to set a variable (customerLabel) depending on whether the user selects "framework" or "team" from a dropdown list. The token set by the dropdown is $grouping-name$. Where am I going wrong, as customerLabel is not being set with a value at all? I've included a snippet of the code below:

| eval teamCustomerLabel=case(issueLabel="customer1", "Customer 1", issueLabel="customer2", "Customer 2", issueLabel="customer3", "Customer 3", issueLabel="customer4", "Customer 4", issueLabel="customer5", "Customer 5", issueLabel="customer6", "Customer 6")
| eval frameworkCustomerLabel=case(issueLabel="customer1", "Group 1", issueLabel="customer2", "Group 1", issueLabel="customer3", "Group 2", issueLabel="customer4", "Group 2", issueLabel="customer5", "Group 3", issueLabel="customer6", "Group 3")
| eval customerLabel=case("$grouping-name$"=="framework", frameworkCustomerLabel, "$grouping-name$"=="team", teamCustomerLabel)
| chart count(key) as "Created" over _time by customerLabel where top 50
I find that the logs are not pointing to the right source/sourcetype. Logs are going to source="WinEventLog:Application" and sourcetype="WinEventLog" instead of source="WinEventLog:Security" and sourcetype="WinEventLog:Security". Can someone help me fix this so I get the right source/sourcetype?

index=*_windows (sourcetype="WinEventLog:Security" OR source="WinEventLog:Security") (EventCode=1102 OR EventCode=517) Message="The audit log was cleared*"
| bucket span=1h _time
| stats count by _time, user, ComputerName, signature, index
| eval index=case(index="***_appliances","***_appliances",index="***_windows","***_windows",index="***_linux","***_linux",1=1,index)
| eval AF="0007"
| lookup ****_Thresholds.csv index AF OUTPUT Threshold ID Sev
| fillnull value="UNKNOWN" ID
| fillnull value=9999999 Threshold
| where count>Threshold
| fields - index AF
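Source and sourcetype for Windows events are set by the event log input stanza on the forwarder, so that is the first place to check; a hedged inputs.conf sketch with a hypothetical index name:

# inputs.conf on the Windows forwarder
[WinEventLog://Security]
disabled = 0
index = your_windows_index
# Recent versions typically emit sourcetype=WinEventLog with
# source=WinEventLog:Security for this stanza. Events arriving with
# source=WinEventLog:Application suggest the Application channel stanza
# is what is actually enabled.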
Hi, I need to sort a field list (shown below) where each value is an uppercase letter followed by "- N". How can I do this, please?
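A hedged sketch, assuming values like "A - 1", "A - 10", "B - 2" in a hypothetical field called myfield: split each value into its letter and numeric parts so the number sorts numerically instead of lexically.

| rex field=myfield "^(?<letter>[A-Z])\s*-\s*(?<num>\d+)$"
| eval num=tonumber(num)
| sort 0 letter num
| fields - letter num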
Splunk is so nice, they made config management systems thrice! The indexer cluster manager, deployment server, and SHC deployer let you centralize configuration, which can then be pushed to (or pulled by) the rest of your Splunk infrastructure. But how do you manage the config on these configuration management systems themselves? Is it simply someone SSH'ing into them and updating config files? Do you do something more sophisticated than that? Any and all answers welcome! The most charming answer will be selected as the best answer!
Hi All, I configured the Microsoft add-on to pull from an event hub so that Splunk gets all security alerts from Defender for Cloud. It seems Splunk can't collect some of the alerts, and I don't understand why. The event hub is properly configured, because I see all the logs from the event hub; I also see some security alerts, but not all of them. The only thing that makes me suspicious is that the event hub has 3 consumer groups and the input is configured with only one consumer group. Any help?
Hi, I have 3 indexes. I need to extract hash values from index 3 and run a search to see whether the same files exist in indexes 1 and 2 as well. Indexes 1 and 2 have the same field name, whereas index 3 holds the values in 3 different fields. I need to get all of these values into a single field and then compare, to see whether the same files exist in the other indexes.

Details:
Index=1, sourcetype=1, hash_file
Index=2, sourcetype=2, hash_file
Index=3, sourcetype=3, hash_md5, hash_sha1, hash_sha256

Could someone please help me with the SPL?
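A hedged sketch using the field names above; mvappend() skips null arguments, so events from all three indexes fold into one multivalue hash field:

(index=1 OR index=2 OR index=3)
| eval hash=mvappend(hash_file, hash_md5, hash_sha1, hash_sha256)
| mvexpand hash
| stats values(index) as indexes dc(index) as index_count by hash
| where index_count > 1

index_count > 1 keeps hashes seen in more than one index; tighten the where clause if the requirement is specifically hashes from index 3 that also appear elsewhere.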
Hi, I have configured the InfoSec app in my Splunk environment, making sure that I completed all the prerequisite steps. It was working for a couple of days, but it suddenly stopped showing data. I have CIM for Splunk installed, and I can see in the InfoSec health panel that the acceleration for the data models is working, but I'm only receiving events and details from the Authentication and Change data models. Going through this documentation: https://docs.splunk.com/Documentation/InfoSec/1.7.0/Admin/ValidateDataSources#Identify_tagged_events_to_configure_the_data_models I have checked that only Authentication and Change are getting data, not the rest. If I follow the guide, there are no tags for the rest of the data models. Is this why InfoSec stopped working? Can anyone help with this? Thank you. Regards
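A hedged spot check in the spirit of the validation doc linked above: search for the tags a data model's constraint expects and see whether anything is being tagged at all. Network_Traffic, for example, expects tag=network and tag=communicate; other models expect other tags.

tag=network tag=communicate earliest=-24h
| stats count by index sourcetype

If a search like this returns nothing, a likely cause is that the add-ons supplying those tags for your sources are missing, disabled, or not installed on the search head.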
My data consists of individual messages, tagged with the userID of the user who sent them. I want to count the number of users who say "burger" and "fries", but not necessarily in the same message. In the example:

UserID  Message
1       "I'd like to order a burger"
2       "The weather is nice"
1       "I'd also like some fries"
2       "I'd only like a burger"

user 1 should be counted but user 2 shouldn't. I believe a way to do this would be inner joining by userID on two separate searches:

index=idx_chatbot_gb_p component=chatbot-ms
| spath "userID"
| spath input=payload output=Message path=messages.message{}.plaintext
| search (Message=* burger *)
| join type=inner userID
    [ seach (Message=* fries *) ]

I get zero results when I try this, even though I get results on the individual searches and many users order burgers and fries. Does anyone know a better way to do this, or can anyone spot what I've done wrong? Thanks
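A hedged join-free alternative that reuses the base search above: flag each message for each word, aggregate per user, and keep the users who said both. match() with word boundaries also avoids depending on leading/trailing spaces the way Message=* burger * does.

index=idx_chatbot_gb_p component=chatbot-ms
| spath "userID"
| spath input=payload output=Message path=messages.message{}.plaintext
| eval has_burger=if(match(Message, "(?i)\bburger\b"), 1, 0)
| eval has_fries=if(match(Message, "(?i)\bfries\b"), 1, 0)
| stats max(has_burger) as said_burger max(has_fries) as said_fries by userID
| where said_burger=1 AND said_fries=1
| stats count as users_saying_both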
Hello, is it possible to get data from Kafka into Splunk SaaS (Splunk Cloud)?
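One common route is Splunk Connect for Kafka, a Kafka Connect sink connector that forwards topics to an HTTP Event Collector (HEC) endpoint, which Splunk Cloud exposes. A hedged connector-properties sketch; the URI, token, topic, and index values are all hypothetical:

# Kafka Connect sink properties -- values are placeholders
name=splunk-sink
connector.class=com.splunk.kafka.connect.SplunkSinkConnector
topics=my-topic
splunk.hec.uri=https://http-inputs-mystack.splunkcloud.com:443
splunk.hec.token=00000000-0000-0000-0000-000000000000
splunk.indexes=main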
Hi, how can we create an alert on database agent availability, similar to the alerts for the app or machine agents? I am unable to find a similar metric for the DB agent. Regards, Mohit