All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all. Hoping someone can point me in the right direction for a very annoying, persistent issue. DBX points to an MS SQL Server cluster using a DNS host name in the connection string. Normally it works pretty well, except when there is a site change and a bunch of scheduled queries time out. The issue can be replicated in SQL Explorer, but the timeouts can be quite inconsistent.

Tests on the search head server show that the host name is being resolved correctly and that the server can connect to all resolved IPs. I did note that the order of the resolved IPs is somewhat unpredictable and that the IP for an offline node may be returned first. My running theory is that DB Connect attempts to connect to the first resolved IP and then fails with a timeout before it is able to try one of the other IPs.

Is this expected behaviour? Is there any way to reduce the connection timeout value? I have found a number of posts on these boards relating to similar problems, but they don't quite apply here.
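For anyone sketching an answer: assuming the Microsoft JDBC driver is in use (jTDS property names can differ), a shorter login timeout and multi-subnet failover can be set directly in the JDBC URL. This is only a sketch of the idea, not a confirmed fix; the host and database names are placeholders:

jdbc:sqlserver://sqlcluster.example.com:1433;databaseName=mydb;loginTimeout=10;multiSubnetFailover=true

With multiSubnetFailover=true the driver attempts connections to all resolved IPs in parallel instead of waiting for the first one to time out, which is aimed at exactly this kind of clustered-listener DNS behaviour.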
I'm new to Splunk, so apologies if this is a silly question. I have a log file that reads:

2023-03-22 00:57:09,517 INFO TestScript - Generating reports with date of 20230321 and thread pool size of 5
...
2023-03-22 00:59:23,681 INFO MultiTestScript - Multi Test report generation completed successfully!

and I am trying to extract the elapsed time between these two events. If I try this search

<search terms> | transaction startswith="Generating reports" endswith="report generation completed"

I get no results found.

If I search for the two halves of the transaction separately, i.e.

<search terms> | transaction startswith="Generating reports"

and

<search terms> | transaction endswith="report generation completed"

the search returns the appropriate part of the log file. As soon as I combine the startswith= and endswith= fields in a single search, however, I get no results.

This query works properly with another log file. The only difference I can see between the files is that the file that works contains multiple transactions (i.e. "Generating reports"/"report generation completed" pairs) while the files that won't work contain only one.
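In case a workaround helps while the transaction behaviour gets sorted out: when there is only one start/end pair in the search window, the elapsed time can also be computed with stats instead of transaction. A minimal sketch, assuming both lines match <search terms>:

<search terms> ("Generating reports" OR "report generation completed")
| stats earliest(_time) as start_time latest(_time) as end_time
| eval elapsed_seconds=end_time-start_time

This sidesteps transaction's ordering and event-limit quirks entirely, at the cost of assuming exactly one pair in the time range.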
We have some MS DNS logs we want to ingest, and we want to clean up some of the text before processing.

Essentially the field data is coming in as (10)somedomain(3)diy(8)whatever(3)com(0) and we want to show only somedomain.diy.whatever.com

I have the first part, I think, using search as a test of course:

| rex field=query mode=sed "s/\(.*?\)/./g"

which leaves me with .somedomain.diy.whatever.com. and I can't seem to find a way to get rid of the leading and trailing dots. Is there a way to do it all in one line? Bear with me here, this is new territory for me.

Thanks for your help
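One single-line possibility, sketched with eval rather than sed (replace and trim are standard eval functions; query_clean is just an illustrative field name):

| eval query_clean=trim(replace(query, "\(\d+\)", "."), ".")

replace() swaps every (N) length marker for a dot, and trim() then strips the leading and trailing dots in the same expression.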
Historical license usage is not showing the graph for some days, but the data are all there. The search string is the one below; it never changed, and it worked well before.

(index=_internal host=xxxxxx source=*license_usage.log* type="RolloverSummary" earliest=-30d@d)
| eval _time=('_time' - 43200)
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "volume" fixedrange=false
| join type=outer _time
    [| search (index=_internal host=xxxxxxx source=*license_usage.log* type="RolloverSummary" earliest=-30d@d)
    | eval _time=('_time' - 43200)
    | bin _time span=1d
    | dedup _time stack
    | stats sum(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach "*" [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3) ]
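A quick way to check whether the gap is in the underlying events or in the later eval/join stages is to count the raw RolloverSummary events per day (host value elided as above):

index=_internal host=xxxxxx source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
| timechart span=1d count

If days are missing here too, the problem is upstream of the dashboard search; if not, the 43200-second _time shift or the join is a likely suspect.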
Hi experts, has anyone had any experience using "Python for Scientific Computing" to classify Japanese text? Is the app made to work with the Japanese language? Thank you for sharing in advance!
Hi. Subject is confusing so here goes. I have 3 log lines:

org=A Status=Success
org=A Status=Fail
org=B Status=Success

I would like to get stats for orgs that have Status=Success, but not if those orgs have even one log where Status=Fail. I tried to filter in the search query (but then I get the line where org=A Status=Success) and in a WHERE clause, with no luck. I'm trying to find the right method to do this kind of filtering.

Thanks!
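A sketch of one way to express "success only, and no failures at all" per org, assuming the org and Status fields are extracted as shown:

<your search> Status=Success OR Status=Fail
| stats count(eval(Status="Fail")) as fails count(eval(Status="Success")) as successes by org
| where fails=0

Counting both outcomes per org first and then filtering on fails=0 avoids the trap of filtering individual events, which is why the plain search-time filter kept returning the org=A success line.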
I want to have a table or chart where I can see the failure % over the past 30 days vs. today, and output the difference too. So the table should have: successes over the past 30 days, failures over the past 30 days, failure % over the past 30 days, total over the past 30 days, successes today, failures today, failure % today, total today, and failure % today minus failure % over the past 30 days.

So far I have something like this, for only the past-30-day fields:

earliest=-30days
| eval status=case('httpReturnCode' == 200, "Success", 'httpReturnCode' != 200, "Invalid")
| stats count(eval(status="Success")) as Success, count(eval(status="Invalid")) as Failure by loggingObject.methodName
| eval "Failure(%)"=(Failure/(Success + Failure)) * 100
| eval Total = Success + Failure
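A sketch of computing both windows in a single pass, by tagging each event as today vs. the prior 30 days and counting conditionally (field names beyond the ones in the question are illustrative):

earliest=-30d@d
| eval status=if('httpReturnCode'==200, "Success", "Failure")
| eval is_today=if(_time>=relative_time(now(), "@d"), 1, 0)
| stats count(eval(status="Success" AND is_today=0)) as Success30 count(eval(status="Failure" AND is_today=0)) as Failure30 count(eval(status="Success" AND is_today=1)) as SuccessToday count(eval(status="Failure" AND is_today=1)) as FailureToday by loggingObject.methodName
| eval Total30=Success30+Failure30, TotalToday=SuccessToday+FailureToday
| eval FailPct30=round(Failure30/Total30*100, 2), FailPctToday=round(FailureToday/TotalToday*100, 2)
| eval Difference=FailPctToday-FailPct30

This treats "past 30 days" as excluding today; drop the is_today=0 conditions if today should be included in the baseline.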
Hello, is it possible to retrieve Splunk DDSS archived data using SmartStore with AWS S3 on an Enterprise instance?
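For comparison while waiting for an authoritative answer: DDSS exports are usually restored by thawing the exported bucket copies rather than through SmartStore. A rough sketch under that assumption (the bucket and index names are placeholders):

aws s3 cp s3://my-ddss-export/db_1679443200_1676851200_42/ $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/db_1679443200_1676851200_42/ --recursive
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/db_1679443200_1676851200_42

splunk rebuild regenerates the index and metadata files so the thawed bucket becomes searchable.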
I have a .csv file that I have uploaded as a lookup file, and it works fine when I run a search. If I ask another user to run the same search, it times out for them: no error, it just does nothing. I have the permissions set for everyone to read/write. Should the permissions be different?
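One quick isolation test, assuming the file is named something like mylookup.csv (hypothetical name): have the other user run the lookup by itself, which separates a permissions/visibility problem from a problem with the rest of the search:

| inputlookup mylookup.csv | head 5

If this returns rows for you but not for them, it is a sharing/permission (or app-context) issue rather than a search issue.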
We are trying to invoke alerts from Splunk to NetCool and are wondering what the right approach would be. We came up with 3 proposals:

Solution 1: Create a script, invoke it in alert actions, and pass the parameters.
Solution 2: Create a custom command, append it to the SPL, and pass the arguments.
Solution 3: Create a custom alert action with HTML form fields (just like Send Email/SNOW) - preferred.

We also came across the Splunk dev documentation at Create custom alert actions for Splunk Cloud Platform or Splunk Enterprise.

Any feedback would be appreciated.
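For anyone weighing option 3, the skeleton of a custom alert action script is small. A sketch, assuming the documented invocation contract (Splunk runs the script with --execute and passes a JSON payload on stdin, with the HTML form field values under "configuration"); the file name, field names, and NetCool forwarding are placeholders:

# bin/netcool_alert.py
import sys
import json

if __name__ == "__main__" and "--execute" in sys.argv:
    # Splunk sends the alert payload as JSON on stdin
    payload = json.loads(sys.stdin.read())
    config = payload.get("configuration", {})
    # e.g. config.get("netcool_endpoint"), config.get("severity") -- hypothetical form fields
    # forward the event to NetCool here (probe, REST, or socket, whatever your NetCool side exposes)
    sys.exit(0)

The matching alert_actions.conf stanza and the HTML form go in the app alongside this script, per the dev documentation linked above.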
Hi everyone, I am looking for ideas on how to implement a case where subsearches are run based on the user's choice in a checkbox input. For example:

[subsearch 1] - if the choice matches "YES"
[subsearch 2] - if the choice matches "NO"
[subsearch 3] - if the choice matches "Maybe"

Then combine the results and display them in a single panel. In the worst case, all options are selected.
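One low-tech sketch: always append all the subsearches, but let each one filter itself on the checkbox token, so unselected branches return nothing. The macro names and the $choices$ token are placeholders; this assumes a Simple XML checkbox input whose selected values land in $choices$:

`subsearch_yes` | where match("$choices$", "YES")
| append [ search `subsearch_no` | where match("$choices$", "NO") ]
| append [ search `subsearch_maybe` | where match("$choices$", "Maybe") ]

The trade-off is that every branch still executes even when deselected; a cleaner variant sets per-choice tokens in the input's <change> block and substitutes them straight into the SPL so unselected branches never run at all.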
Is there a way to send an alert email if today's hourly stats are 25% higher than the hourly stats for the same day the previous week?
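A sketch of the comparison search such an alert could run (the index name and the exact threshold logic are illustrative):

index=main ((earliest=@d latest=now) OR (earliest=-7d@d latest=-6d@d))
| eval day=if(_time>=relative_time(now(), "@d"), "today", "lastweek")
| bin _time span=1h
| eval hour=strftime(_time, "%H")
| stats count by hour day
| xyseries hour day count
| eval pct_change=round((today-lastweek)/lastweek*100, 1)
| where pct_change>25

Saved as an alert that triggers when results are returned, this emails whenever any hour today runs more than 25% above the same hour a week ago.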
Hi guys, I am trying to enable drilldown for only the first column in a stats table. Any suggestions on how we can do this? I need the rest of the columns to appear normal, without any links (not clickable).
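In Simple XML this can be sketched with conditional drilldown: one <condition> for the first column's field and a catch-all empty <condition> that swallows clicks everywhere else. The field name host and the link target are placeholders:

<drilldown>
  <condition field="host">
    <link target="_blank">search?q=index%3Dmain%20host%3D$click.value$</link>
  </condition>
  <condition field="*">
    <!-- no action: clicks on all other columns do nothing -->
  </condition>
</drilldown>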
Hello, I am attempting to start a Splunk Docker container (search head) and add it as a search peer to an existing environment, all in one bash script, but I am running into an issue. I am able to run each of the two steps separately without a problem, but hit the issue when I attempt to combine them into one script.

I am able to build my Dockerfile and start the container successfully. I am running the command below to start a container with the name splunk_sh:

docker run -d --rm -it -p 8000:8000 --name splunk_sh dockersplunk:latest

After the container is up, I am also able to successfully add it as a search peer using the following command and script. (A copy of the search_peer.sh script is copied into my container via the Dockerfile.)

# search peer command
docker exec -it splunk_sh sh /opt/splunk/bin/search_peer.sh

search_peer.sh:

#!/bin/bash
sudo /opt/splunk/bin/splunk add search-server https://<indexer_ip>:8089 -auth <user>:<password> -remoteUsername <user> -remotePassword <password>

Running the two steps above separately allows me to start my Splunk container and have it become a search peer. I begin to run into an issue when I try to run a script (docker_search_peer.sh) that includes both steps: starting the splunk_sh container and the search peer command.

docker_search_peer.sh:

#!/bin/bash
docker run -d --rm -it -p 8000:8000 --name splunk_sh dockersplunk:latest
docker exec -it splunk_sh sh /opt/splunk/bin/search_peer.sh

When I run my docker_search_peer.sh script, the container is able to start but is not able to become a search peer. I get the error below:

ERROR: Couldn't determine $SPLUNK_HOME or $SPLUNK_ETC; perhaps one should be set in environment

I've disabled SELinux (this was mentioned in a few different posts) but am still running into this issue. I'm not sure how I'm able to run commands/execute scripts separately but not together in one script. Any help or guidance would be much appreciated.
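A guess worth testing, sketched below: when the two steps run back-to-back, docker exec fires before the container's entrypoint has finished bringing splunkd (and its environment) up, whereas running the commands by hand leaves a natural delay. A hedged variant of docker_search_peer.sh that waits for splunkd and passes the environment explicitly; the wait condition and variables are assumptions about the image's behaviour, not a confirmed diagnosis:

#!/bin/bash
docker run -d --rm -it -p 8000:8000 --name splunk_sh dockersplunk:latest

# Wait until splunkd reports it is running before adding the search peer
until docker exec splunk_sh /opt/splunk/bin/splunk status 2>/dev/null | grep -q "splunkd is running"; do
  sleep 5
done

# Pass SPLUNK_HOME/SPLUNK_ETC explicitly in case the exec shell lacks the entrypoint's environment
docker exec -e SPLUNK_HOME=/opt/splunk -e SPLUNK_ETC=/opt/splunk/etc splunk_sh sh /opt/splunk/bin/search_peer.sh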
Hello, I am upgrading 800 Splunk universal forwarders running Red Hat Linux, using a custom app. When I assign this custom app to the universal forwarders, Splunk shuts down and does not start again. With version 7.2.6 it works fine, with no issue; with anything after that version, the script does not work.

03-21-2023 12:26:05.128 -0500 INFO PipelineComponent - Performing early shutdown tasks
03-21-2023 12:26:05.128 -0500 INFO loader - Shutdown HTTPDispatchThread
03-21-2023 12:26:05.128 -0500 INFO ShutdownHandler - Shutting down splunkd
03-21-2023 12:26:05.128 -0500 INFO ShutdownHandler - shutting down level "ShutdownLevel_Begin"
03-21-2023 12:26:05.128 -0500 INFO ShutdownHandler - shutting down level "ShutdownLevel_FileIntegrityChecker"
03-21-2023 12:26:05.128 -0500 INFO ShutdownHandler - shutting down level "ShutdownLevel_JustBeforeKVStore
Search 1:

| inputlookup test1.csv | table ITEM1 ITEM2

Search 2:

| inputlookup test2.csv | table ITEM1 ITEM3

Conclusion: I want it to show

| table ITEM1 ITEM2 ITEM3

but my results are showing

ITEM1 ITEM2
ITEM1 ITEM2
ITEM1       ITEM3
ITEM1       ITEM3

Question: how can I join the ITEM1s, so that I get a result of ITEM1 ITEM2 ITEM3?
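A sketch of one way to line the two lookups up on ITEM1 (join works here; stats values by ITEM1 is the other common idiom):

| inputlookup test1.csv
| join type=outer ITEM1
    [| inputlookup test2.csv ]
| table ITEM1 ITEM2 ITEM3

The outer join keeps every row from test1.csv and fills in ITEM3 where test2.csv has a matching ITEM1.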
I am attempting to audit the usage of commands such as chown or chmod in my Linux environment. Through the query below I am able to see the list of users, hosts, and the commands that were run, but not the files or directories that they were run on. There are no fields in the event viewer that show file paths or directories of any kind.

index=myindex comm="chmod"
| table date, host, AUID, comm, exe, source

Any assistance would be appreciated. Pretty new to Splunk.
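If these are auditd events, the file path usually lives in a separate PATH record that shares an audit id with the SYSCALL record carrying comm. A sketch of stitching them together, assuming the type, comm, AUID, and name fields are extracted and the raw events contain the standard msg=audit(timestamp:serial) prefix:

index=myindex ((type=SYSCALL comm="chmod") OR type=PATH)
| rex "msg=audit\((?<audit_id>[^\)]+)\)"
| stats values(comm) as comm values(AUID) as AUID values(name) as paths by host audit_id
| where comm="chmod"

Here name (from the PATH records) is the file or directory the syscall touched; if your TA extracts the fields differently, the idea of grouping on the shared audit id still applies.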
Hi all, we have recently installed Enterprise Security, but strangely the default dashboards don't display the indexes we have in our environment. Initially I thought the indexes were not CIM compliant, but that wasn't the case, as many of them are. Unfortunately, I am running out of ideas and need some help configuring it. Need someone who can help me with it. Thanks much.
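One thing worth checking while troubleshooting: ES dashboards search the CIM data models, not the raw indexes, and the data models only cover the indexes listed in the CIM indexes-allowlist settings (configured via the Splunk_SA_CIM app setup). A quick sketch to see whether a given model is picking up events at all, using Authentication as the example model:

| datamodel Authentication Authentication search
| head 5

If this returns nothing even though the raw indexes hold CIM-compliant data, the indexes constraint (and data model acceleration) is the usual place to look.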
Hello, I am attempting to replace a large, unwieldy macro with a data model. Part of the macro is a rex command that finds what we call "confusable characters" (highbit versions of ASCII characters, like 𝟐 or ꓜ) and replaces them with the ASCII versions (2 or Z respectively), like this:

rex field=$arg1$ mode=sed "y/𝟐𝟚𝟤𝟮𝟸ꝚƧϨꙄᒿꛯ/22222222222/"

The actual macro is much longer and encompasses all numbers and letters. I have been having difficulty figuring out how to incorporate this into the data model. I've been able to use a CSV lookup like this:

char_search,old_char,new_char
*𝟐*,𝟐,2
*ꓜ*,ꓜ,Z

Make char_search a wildcard match field, and use this query:

| makeresults
| eval t="dfasdf𝟐𝟐"
| lookup CSVconfusables char_search as t OUTPUT
| eval u=replace(t,old_char,new_char)

It works fine with 1 character to replace, but when there are multiple to replace, the lookup output fields become multivalue and replace doesn't work:

| makeresults
| eval t="ꓜdfasdf𝟐𝟐"
| lookup CSVconfusables char_search as t OUTPUT
| eval u=replace(t,old_char,new_char)

Is there any way to accomplish what the macro is doing in a data model? Thanks in advance!
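On the multivalue snag specifically: one sketch that applies every matched pair in turn, assuming Splunk 9.0+ where foreach supports mode=multivalue (mvzip pairs up the two multivalue lookup outputs, and each iteration rewrites t; the pairs field name is illustrative):

| makeresults
| eval t="ꓜdfasdf𝟐𝟐"
| lookup CSVconfusables char_search as t OUTPUT old_char new_char
| eval pairs=mvzip(old_char, new_char, "=")
| foreach mode=multivalue pairs
    [ eval t=replace(t, mvindex(split(<<ITEM>>, "="), 0), mvindex(split(<<ITEM>>, "="), 1)) ]

Whether this can live inside a data model is a separate question, since calculated fields can't run lookup-plus-foreach chains, so this only addresses the multivalue replace part.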
Hi all, I want the chart created in the way shown in my first screenshot, with the x-axis showing both date and time like that; my second screenshot shows the chart I am able to create. (Screenshots omitted.) I tried eval strftime on _time but am not getting the desired result.

The 1st query I tried:

index=unix (source=cpu sourcetype=cpu) OR (sourcetype=vmstat) host IN (usaws135000)
| fields _time cpu_load_percent memUsedPct swapUsedPct host
| timechart span=1h eval(round(avg(cpu_load_percent),2)) as CPUAvg eval(round(avg(memUsedPct),2)) as MemoryAvg eval(round(avg(swapUsedPct),2)) as SwapAvg by host useother=t limit=0

The 2nd query I tried:

index=unix (source=cpu sourcetype=cpu) OR (sourcetype=vmstat) host IN (usaws1350)
| fields _time cpu_load_percent memUsedPct swapUsedPct host
| bin span=1h _time
| eval _time=strftime(_time,"%a %b %d %Y %H:%M:%S")
| stats avg(cpu_load_percent) as CPUAvg avg(memUsedPct) as MemoryAvg avg(swapUsedPct) as SwapAvg by _time
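A sketch of a variant of the 2nd query that keeps real timestamps for bucketing and only formats them for display at the end, so stats still groups on _time but the x-axis shows a date-and-time string:

index=unix (source=cpu sourcetype=cpu) OR (sourcetype=vmstat) host IN (usaws1350)
| bin span=1h _time
| stats avg(cpu_load_percent) as CPUAvg avg(memUsedPct) as MemoryAvg avg(swapUsedPct) as SwapAvg by _time
| eval Time=strftime(_time, "%a %b %d %Y %H:%M:%S")
| fields - _time
| table Time CPUAvg MemoryAvg SwapAvg

Note the trade-off: once _time becomes the string field Time, the chart treats the x-axis as categorical labels rather than a continuous time axis.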