All Topics


Good day, I often run up against the issue of wanting to drag the text of a field name from the browser into a separate text editor. Whenever I drag it, it works, but it brings all the HTML metadata with it. Sometimes these field names are very long and truncated on screen, so it's very tough without copying and pasting. Has anyone found a good workaround for this? Right now a field name, when dragged from the web browser into a text editor, comes through like this: https://fakebus.splunkcloud.com/en-US/app/search/search?q=search%20%60blocks%60&sid=1726153610.129675&display.page.search.mode=verbose&dispatch.sample_ratio=1&workload_pool=&earliest=-30m%40m&latest=now#https://fakebus.splunkcloud.com/en-US/app/search/search?q=search%20%60blocks%60&sid=1726153610.129675&display.page.search.mode=verbose&dispatch.sample_ratio=1&workload_pool=&earliest=-30m%40m&latest=now# Ironically, dragging a field's text from Splunk into this web dialog box works fine.
Hello, I'm not sure how to troubleshoot this at all. I've created a new Python-based app through the Add-on Builder that uses a collection interval of 60 seconds, and the app input is set to 60 seconds as well. When I test the script, which makes chained API calls and creates events based on the last API call, it returns within 20 seconds. The app should create about 50 events per interval, so when searching I would expect to see about 50 events every minute, but I'm seeing 6 or 7 per minute. I ran the following query, and it shows that the event time and index time are within milliseconds of each other:

source=netscaler | eval indexed_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S") | eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S") | table _raw event_time indexed_time

When looking at the app log, I see it's only making the final API calls every 20 seconds instead of making all 50 of the final API calls within milliseconds. Does anyone have any idea why this would occur and how I could resolve this lag? Thanks for your help, Tom
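A quick way to confirm whether the events are really trickling in or the script itself is pacing them is to chart the per-minute event count alongside the indexing lag. A rough diagnostic sketch, assuming the same source=netscaler shown above:

    source=netscaler earliest=-60m
    | eval index_lag_sec = _indextime - _time
    | timechart span=1m count, avg(index_lag_sec) as avg_index_lag_sec

If the count really is 6-7 per minute while the indexing lag stays near zero, the delay is most likely inside the script's chained API calls rather than in indexing or forwarding.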
Hey All, can anybody help me with optimization of this rex?

| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>.*?),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>.*?),\s*EXCID:\s*(?P<EXCID>[a-zA-Z_]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#"

Example log:

"#HLS# IID: EB_FILE_S, STEP: SEND_TOF, PKEY: Ids:100063604006, 1000653604006, 6000125104001, 6000135104001, 6000145104001, 6000155104001, STATE: IN_PROGRESS, MSG0: Sending request to K, EXCID: dcd, PROPS: EVENT_TYPE: SEND_TO_S, asd: asd #HLE#

ERROR: "Streamed search execute failed because: Error in 'rex' command: regex="#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>.*?),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>.*?),\s*EXCID:\s*(?P<EXCID>[a-zA-Z_]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#" has exceeded configured match_limit, consider raising the value in limits.conf."
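One way to avoid the match_limit blow-up is to stop making a single regex carry all seven captures: the two lazy .*? groups (PKEY and MSG0) force heavy backtracking whenever a later anchor fails to line up. A hedged sketch that splits the extraction into independent rex calls, each anchored on the literal that follows the field (field names are taken from the regex above; it assumes PKEY values never contain the text ", STATE:" and MSG0 values never contain ", EXCID:"):

    | rex "IID:\s*(?<IID>[^,]+)"
    | rex "STEP:\s*(?<STEP>[^,]+)"
    | rex "PKEY:\s*(?<PKEY>.+?),\s*STATE:"
    | rex "STATE:\s*(?<STATE>[^,]+)"
    | rex "MSG0:\s*(?<MSG0>.+?),\s*EXCID:"
    | rex "EXCID:\s*(?<EXCID>[a-zA-Z_]+)"
    | rex "PROPS:\s*(?<PROPS>[^#]+)"

Each expression now fails or succeeds on its own small region instead of re-trying the entire combined pattern, which usually keeps the work well under match_limit.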
I am using AWS SNS to send notifications, but I am not able to find a way to send all the results that triggered the alert. I see the $result._raw$ option, but it does not contain any data in the notification. Can anyone please confirm how to send all query results to SNS? Thanks in advance.
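The $result.fieldname$ tokens only carry values from the first result row, so one workaround is to collapse everything into a single row before the alert action fires. A rough sketch, assuming a field name such as all_results is acceptable in the SNS message (the field name and separator are placeholders):

    ... your base search ...
    | stats list(_raw) as all_results
    | eval all_results = mvjoin(all_results, " ||| ")

Then reference $result.all_results$ in the notification. Keep in mind SNS messages are capped at 256 KB, so very large result sets still need truncating or a link back to the triggering search.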
Hello Splunkers, I'm trying to push data to the indexers from HFs where syslog-ng is receiving the logs. This is from an unsupported device, so no TA is available on Splunkbase. My question is about writing inputs.conf: can I just create one directory, call it cisco_TA, create a directory called local inside it, and place my inputs.conf there? Is that sufficient to create a custom TA and transport the logs, or should I also create the other directories such as default, metadata, licenses, etc.? Please advise. Thank you, regards, Moh.
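For reference, a bare-bones custom TA really can be just an app directory containing a local/inputs.conf; default/, metadata/ and the rest are optional for simple log collection. A minimal sketch (directory, index, sourcetype and monitored path are placeholders, adjust to where syslog-ng writes its files):

    $SPLUNK_HOME/etc/apps/TA-cisco_syslog/
        local/
            inputs.conf
        default/
            app.conf        (optional, useful for naming and versioning)

    # local/inputs.conf
    [monitor:///var/log/syslog-ng/cisco/*.log]
    index = network
    sourcetype = cisco:syslog
    disabled = 0

On a heavy forwarder you would typically add props/transforms later if you need parsing or routing, but they are not required just to pick up and forward the files.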
Hi, I have two fields, each of which appears in a different event. I want to search for events where aggr_id contains the session_ID, basically field1=*field2*:

field1: session_ID=1234567890
field2: aggr_id=ldt:1234567890:09821
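If aggr_id always follows the prefix:ID:suffix pattern shown above, one sketch is to normalise both event types onto a common key and then group them; the ldt:...:... format and the index name are assumptions taken from the example:

    index=your_index (session_ID=* OR aggr_id=*)
    | eval join_key = coalesce(session_ID, mvindex(split(aggr_id, ":"), 1))
    | stats values(session_ID) as session_ID, values(aggr_id) as aggr_id by join_key
    | where isnotnull(session_ID) AND isnotnull(aggr_id)

Rows that survive the final where are the cases where an aggr_id event and a session_ID event share the same ID, without needing a wildcard subsearch per value.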
My free 60-day trial expired and I have since applied a free trial license, but I am still unable to use the Splunk search head. Each time I get: "Error in 'lit search' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK." I have already updated the license and restarted the application. Please help me with this.
I have an alert that I am trying to throttle based on a few fields from the results, with the condition that once it triggers it shouldn't trigger again for the next 3 days unless it has different results. But the alert runs every 15 minutes and I can see the same results every 15 minutes. My alert writes its results to another index, for example:

blah blah , ,  , , ,  , , ,  | collect index=testindex sourcetype=testsourcetype

Based on my research, I came across a post saying that since a pipe command is still part of the search, throttling would have no effect, because the search hasn't completed yet and can't be throttled. I think this is because the front end says "After an alert is triggered, subsequent alerts will not be triggered until after the throttle period", but that doesn't say they aren't run. Is that the case? If so, how can I stop writing duplicate values into my index?
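That reading matches how throttling behaves: it suppresses alert actions, not the search itself, so a | collect inside the search runs on every scheduled execution regardless of throttling. One way to suppress the duplicates is to have the search itself drop rows that were already written to the summary index in the last 3 days; a hedged sketch, assuming a field such as alert_key uniquely identifies a result (replace with your real field names):

    ... your alert search ...
    | search NOT [ search index=testindex sourcetype=testsourcetype earliest=-3d
                   | fields alert_key
                   | format ]
    | collect index=testindex sourcetype=testsourcetype

Alternatively, move the collect into a separate scheduled search so the alert search has no trailing pipe and normal throttling can apply to its actions.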
I am in the middle of a Splunk migration. One of the tasks is to move data from some sourcetypes onto the new servers using the | collect index=aws sourcetype=* command. The numbers added up after running checks, but when I ran the same checks again a day later the numbers no longer matched.

August:
Source 1 -> Old Splunk: 12,478,853   New Splunk: 12,478,853
Source 2 -> Old Splunk: 26,171,911   New Splunk: 26,171,911

24 hours later:
Source 1 -> Old Splunk: 12,478,853   New Splunk: 12,477,696
Source 2 -> Old Splunk: 26,171,911   New Splunk: 3,001,183

I've set the following stanza within the indexes.conf file on the deployment server. Also, the index only contains 22 GB of data. Can you help?

[aws]
coldPath = $SPLUNK_DB\$_index_name\colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB\$_index_name\db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB\$_index_name\thaweddb
frozenTimePeriodInSecs = 94608000
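As a diagnostic, running the same per-day breakdown on both environments usually shows whether the missing events are concentrated on particular days (pointing at retention/aging or an incomplete collect run) rather than spread evenly. A hedged sketch, assuming the data sits in index=aws on both sides:

    | tstats count where index=aws by sourcetype, _time span=1d
    | sort sourcetype, _time

Since the index only holds about 22 GB against the 512 GB maxTotalDataSizeMB cap, size-based eviction looks unlikely, so comparing the daily counts should narrow down where the events went.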
Does anyone have an example of a coldToFrozen script to be deployed in a clustered environment? I'm wary of ending up with duplicate buckets, etc.
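The general contract is simple: Splunk invokes the script with the bucket directory as its only argument, the script must archive it, and Splunk deletes the bucket once the script exits 0. A hedged Python sketch (the archive path is a placeholder, and skipping rb_-prefixed buckets is one common way to avoid archiving replicated copies twice, not an official recommendation — test in a lab before rolling out via the cluster manager):

    import os
    import shutil
    import sys

    ARCHIVE_ROOT = "/mnt/frozen"   # placeholder destination, ideally shared storage

    def main():
        bucket = sys.argv[1]                      # Splunk passes the cold bucket path
        if not os.path.isdir(bucket):
            sys.exit("Not a directory: " + bucket)

        name = os.path.basename(bucket)
        if name.startswith("rb_"):
            # replicated copy; let the peer holding the db_ original archive it
            sys.exit(0)

        dest = os.path.join(ARCHIVE_ROOT, name)
        if os.path.isdir(dest):
            # already archived (e.g. the same bucket re-freezing after a restart)
            sys.exit(0)

        shutil.copytree(bucket, dest)
        sys.exit(0)

    if __name__ == "__main__":
        main()

Caveat: with replication the db_/rb_ roles differ per peer, so some sites archive everything and de-duplicate later by bucket ID instead of skipping rb_ copies.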
index=test | table severity location vehicle

severity   location   vehicle
high       Pluto      Bike

testLookup.csv:

severity      location                     vehicle
high octane   Pluto is one of the planet   Bike has 2 wheels

As you can see, my table contains events. Is there a way to compare my table's events against the field values in testLookup.csv without using the lookup command or the join command? For example, if an event's severity value matches, or is contained within, the lookup's severity value (e.g. "high" appears inside "high octane"), then it is true; otherwise false. Thank you.
Hello, we have decided to retire Splunk and the server that Splunk was running on. If the server is decommissioned, do we still need to decommission Splunk separately, or does one amount to the other? If it doesn't, is there a way to still decommission Splunk after the server has been decommissioned? Thank you.
Good day guys, I need to know how SVCs are actually calculated, with examples please. I have already gone through the Splunk docs and YouTube videos, but I still want to understand how the SVC figure is arrived at. Kindly suggest. Thanks in advance.
Hi all, I am trying to show the connected duration, which is calculated using the transaction command, in a timechart. With the query below, the entire duration shows up at the earliest timestamp (start time) as a single column. I would like to show the connected duration in a column chart, with the area between start and end time colored. For example, if a device was connected from 20th August to 23rd August, I want the column to extend across those days; currently the entire duration is shown on the 20th alone. Kindly let me know your suggestions. Query:

| transaction dvc_id startswith="CONNECTED" endswith="DISCONNECTED" | timechart sum(duration) by connection_protocol
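timechart attributes the whole summed duration to the transaction's start time, so the duration has to be split into per-day slices before charting. A hedged sketch, assuming duration is in seconds as transaction produces it:

    | transaction dvc_id startswith="CONNECTED" endswith="DISCONNECTED"
    | eval day = mvrange(relative_time(_time, "@d"), _time + duration, 86400)
    | mvexpand day
    | eval seconds_in_day = min(_time + duration, day + 86400) - max(_time, day)
    | eval _time = day
    | timechart span=1d sum(seconds_in_day) by connection_protocol

Each transaction becomes one row per calendar day it spans, carrying only the seconds that fall inside that day, so a 20-23 August connection produces a column on each of those days.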
Hello, I am running Splunk Enterprise 9.2.2 and am trying to install Python for Scientific Computing for Windows, as I am running on a Windows server: Python for Scientific Computing (for Windows 64-bit) | Splunkbase. However, I get the following error when I try to install the application. I tried with the .tgz file and also with the extracted .tar file, but both have the same issue: "It looks like the webpage at https://localhost:8000/en-US/manager/appinstall/_upload might be having issues, or it may have moved permanently to a new web address. ERR_CONNECTION_ABORTED" Is it due to the file size being too large? And what could be the solution? Thanks
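The PSC package is hundreds of MB and Splunk Web enforces an upload size limit, so ERR_CONNECTION_ABORTED on /manager/appinstall/_upload is often the upload cap rather than a browser problem. A hedged sketch of the usual workaround (the 2048 value is just an example, in MB):

    # $SPLUNK_HOME/etc/system/local/web.conf
    [settings]
    max_upload_size = 2048

Restart Splunk after the change. Alternatively, skip the web upload entirely: extract the .tgz straight into $SPLUNK_HOME/etc/apps/ on the server and restart.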
I have a Splunk Cloud instance that receives logs from a Linux server running a Splunk heavy forwarder. I am trying to update the forwarder to 9.3.x, but found online that I should step through 9.2.x first. The server appears to be updated and is running Splunk 9.2.0 as expected, and I am seeing metrics.log data on my cloud instance, but none of the other logs I push from this server are showing up. When I check the CMC app, the update has taken and the forwarder now shows as in compliance. I am not sure what I am doing wrong, or what logs you might need to help figure out where the issue is. I only have about 6 months of Splunk experience, so forgive me if this is a silly question.
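Not a silly question. Since metrics.log from the forwarder is reaching the cloud stack, the output pipeline is at least partly working, so the next place to look is usually the forwarder's own splunkd errors and whether the original inputs survived the upgrade. A hedged diagnostic search (substitute your forwarder's host name):

    index=_internal host=<your_forwarder> sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
    | stats count by component
    | sort - count

On the forwarder itself, running $SPLUNK_HOME/bin/splunk btool inputs list --debug shows which inputs.conf stanzas are actually active after the upgrade and where they come from.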
I want to find the date when the Splunk admin credential was changed. Is there any way to get it?
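If audit logging is enabled, password changes on local accounts generally leave a trail in the _audit index. A hedged starting point (the action values can vary by version, so broaden or drop the action filter if nothing comes back):

    index=_audit user=admin (action=edit_user OR action=password_change OR action="change_own_password")
    | table _time user action info
    | sort - _time

Note that _audit only goes back as far as its retention, so a change older than that will not be visible.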
How do I explode one row into several by breaking out a multi-value field?

app=ABC client=AA views=View1,View2
app=ABC client=AA views=View1,View2,View3
app=ABC client=BB views=View1,View3
app=ABC client=CC views=View3,View2,View1

I want to table that as column data:

app   client   view
ABC   AA       View1
ABC   AA       View2
ABC   AA       View1
ABC   AA       View2
ABC   AA       View3
ABC   BB       View1
ABC   BB       View3
ABC   CC       View3
ABC   CC       View2
ABC   CC       View1

so that I can run a count on the resulting rows:

app   client   view    count
ABC   AA       View1   2
ABC   AA       View2   2
ABC   AA       View3   1
ABC   BB       View1   1
ABC   BB       View3   1
ABC   CC       View3   1
ABC   CC       View2   1
ABC   CC       View1   1
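A sketch of the usual pattern for this: split the comma-separated field into a multivalue, expand it to one row per value, then count (field names taken from the example above):

    ... base search ...
    | makemv delim="," views
    | mvexpand views
    | rename views as view
    | stats count by app, client, view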
We migrated our Splunk indexer from Ubuntu to RHEL recently. Everything appeared to go fine except for this one add-on. Initially we were getting a different error; I ran fapolicyd-cli add file splunk against it and that error cleared, but now we get this one:

External search command "ldapgroup" returned error code 1. Script output = "error message=HTTPError at "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/splunklib/binding.py", line 1245 : HTTP 403 Forbidden - insufficient permission to access this resources."

I went in and did chown -R on the folder (and every other folder in the path, including /opt/splunk) but that didn't fix it. The files and folders are all owned by splunk and have permission to run. I have verified the firewall ports for 636 and 389 are open. We have tried to reinstall the add-on through the web interface and get a series of similar errors indicating that it can't copy a number of .py files over; some do get copied, though, and most of the folders are created. I'm at a bit of a loss...
Hello, I've seen many others in this forum trying to achieve something similar to what I'm trying to do, but I didn't find an answer that completely satisfied me. This is the use case: I want to compare the number of requests received by our web proxy with the same period in the previous week, then filter out any increase lower than X percent. This is how I've tried to implement it using timewrap, and it's pretty close to what I want. The only problem is that the timewrap command only seems to work when I group by _time alone:

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time span=10m
| timewrap 1w
| where _time >= relative_time(now(), "-60m")
| where (event_count_latest_week - event_count_1week_before) > 0
| where (((event_count_latest_week - event_count_1week_before)/event_count_latest_week)*100) >= 40

This gives me a result like:

_time   event_count_1week_before_week   event_count_latest_week
XXXX    YYYY                            ZZZZ

If I try to do something similar but also group by the name of the web site being accessed in the tstats command, then timewrap no longer works for me; it outputs just the latest values of one of the web sites:

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timewrap 1w
| where _time >= relative_time(now(), "-60m")
| where (event_count_latest_week - event_count_1week_before) > 0
| where (((event_count_latest_week - event_count_1week_before)/event_count_latest_week)*100) >= 40

That doesn't work. Do you know why this happens and how I can achieve what I want? Many thanks. Kind regards.
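timewrap expects a single wrapped series per column, so once the tstats output is also split by Web.site it no longer lines the weeks up per site. One workaround that avoids timewrap entirely is to label and shift the older week yourself, then aggregate by both _time and site. A hedged sketch, assuming the search time range covers both the last hour and the same hour one week earlier (summariesonly written as an explicit option here rather than the macro):

    | tstats summariesonly=true count as event_count from datamodel="Web"
        where sourcetype="f5:bigip:ltm:http:irule" by _time, Web.site span=10m
    | eval period = case(_time >= relative_time(now(), "-60m"), "latest_week",
                         _time >= relative_time(now(), "-1w-60m") AND _time < relative_time(now(), "-1w"), "week_before")
    | where isnotnull(period)
    | eval _time = if(period="week_before", _time + 604800, _time)
    | stats sum(eval(if(period="latest_week", event_count, 0))) as event_count_latest_week,
            sum(eval(if(period="week_before", event_count, 0))) as event_count_1week_before
            by _time, Web.site
    | where (event_count_latest_week - event_count_1week_before) > 0
    | where (((event_count_latest_week - event_count_1week_before) / event_count_latest_week) * 100) >= 40

The older week is shifted forward by 604800 seconds so both periods share the same _time buckets, which keeps the per-site comparison intact.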