All Topics

Hi all, please help. I have an all-in-one (AIO) Splunk Enterprise 8.1.2 setup running on Windows Server, and I have done the following upgrade:

1. Upgraded from 8.1.2 to 8.2.5.
2. Checked that the KV store had already been migrated to wiredTiger, so no migration was done on my end.
3. Upgraded from 8.2.5 to 9.0.5.

After startup I checked the KV store with the command "splunk show kvstore-status --verbose", with the result shown below:

    WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
    This member:
             backupRestoreStatus : Ready
                        disabled : 0
     featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: 'serxxxx SelectionTimeoutMS' expired: [connection closed calling ismaster on '127.0.0.1:8191']
                            guid : DAxxxxxx-xxxx-xxx-xxxx-xxxxxxxxxxxx
                            port : 8191
                      standalone : 1
                          status : failed
                   storageEngine : wiredTiger

Can anyone advise why the KV store failed after the upgrade to 9.0.5? I have tried the following, but the status still shows failed:

Point 2: Take a KV store backup
2.1. Stop Splunk.
2.2. Take a backup of the DB (keep it safe - it may be needed in an emergency):
2.3. cp -r $SPLUNK_DB/kvstore/mongo /backup
2.4. Rename mongod.lock (location: $SPLUNK_DB/kvstore/mongo) to mongod.lock_bkp

Point 3: Clean up the KV store
3.1. Stop the member: ./splunk stop
3.2. Clean the member's KV store: ./splunk clean kvstore --local
3.3. Start Splunk.

Point 4: Check KV store status on each node: ./splunk show kvstore-status

Please help. Thank you.
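Since this is a Windows Server install, note the backup steps above use Linux syntax; a minimal sketch of the Windows equivalents, plus where to look for the underlying mongod error (the paths assume a default install directory and are only illustrative):

    :: Hypothetical default paths - adjust to your installation
    robocopy "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo" "D:\backup\mongo" /E
    type "C:\Program Files\Splunk\var\log\splunk\mongod.log"
    type "C:\Program Files\Splunk\var\log\splunk\splunkd.log" | findstr /i "KVStore mongod"

The mongod.log file usually records why mongod itself failed to start, which tends to be more specific than the "No suitable servers found" connection error that kvstore-status reports.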
If a value matches multiple rows due to wildcards, I want a method to return only the one match that is "narrowest". Is there a way to construct a lookup filter for this? The use case is the following. Given a wildcard lookup (wildlookup) on matchfield:

    matchfield  field1    field2
    abcdefg     matchabc  7match
    abcdef*     matchabc  6match
    abcde*      matchabc  5match
    abc*        matchabc  broadmatch

The default behavior (without a lookup filter) will be:

    matchfield  field1                      field2
    abcdefgh    matchabc matchabc matchabc  6match 5match broadmatch
    abcd        matchabc                    broadmatch
    abcde       matchabc matchabc           5match broadmatch
    abcdef      matchabc matchabc matchabc  6match 5match broadmatch

Because I organized my lookup table such that the narrowest match is the first match by row, I can do

    | eval field1 = mvindex(field1, 0), field2 = mvindex(field2, 0)

But then I have to do this every time I use this lookup. The lookup filter documentation says: "Filter results from the lookup table before returning data. Create this filter like you would a typical search query using Boolean expressions and/or comparison operators." Obviously mvindex is neither a Boolean expression nor a comparison. How do I set up a filter to do this? More broadly, if there is a filter so that I do not have to manually organize my lookup, that would be even better.
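A workaround sketch, assuming you are willing to wrap the lookup in a search macro so the mvindex step is written only once (the macro name is hypothetical):

    # macros.conf
    [narrow_wildlookup]
    definition = lookup wildlookup matchfield OUTPUT field1 field2 | eval field1=mvindex(field1,0), field2=mvindex(field2,0)

Then every search can call | `narrow_wildlookup` instead of repeating the eval. This still relies on the row order in the lookup file, so it does not remove the need to keep the narrowest pattern first.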
Hi Team, we have the current infrastructure: UF -> HF -> Indexers. The question is: can we set up an external load balancer between the UFs and the HFs? The reason is that we have 6 HFs and 20 UFs spread across different zones. Opening firewall ports from 20 UFs to 6 HFs would be a tedious task, as this is an OT environment and we do not want to expose multiple IPs for security reasons. Regards, VK
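For context, the usual alternative is the forwarder's built-in load balancing, which still needs each UF to reach each HF but avoids any extra appliance; a minimal outputs.conf sketch with hypothetical hostnames:

    [tcpout:hf_group]
    server = hf1.example.com:9997, hf2.example.com:9997, hf3.example.com:9997
    autoLBFrequency = 30

An external TCP load balancer in front of the HFs can work, but long-lived forwarder connections tend to get pinned to a single HF behind it, which is why the built-in autoLB is generally preferred when feasible.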
Start out with the top error, which pops up in various places in Splunk Security Essentials. Some posts talk about editing what I believe to be the dashboard XML to say something about dashboard version 1.1, but the second part shows that's not the issue, at least with that one XML file; I also checked another and it had the same version. I didn't check every XML file, but that is what I thought the regex was supposed to find, and it doesn't work. So I arrive here asking for a little more help with one or all parts of this, mostly the JavaScript error. I also don't think it has anything to do with the fact that we hadn't set Splunk Web to HTTPS, because we've had it that way before and got the same error. Anyway, any help would be great, and please be specific, as I am not a coder.
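For reference, the "dashboard version" those posts refer to is the version attribute on the root element of a Simple XML dashboard; a minimal sketch of what to look for at the top of each XML file:

    <dashboard version="1.1">
      <label>Example</label>
      ...
    </dashboard>

Splunk 9.x flags dashboards whose root element lacks version="1.1" as part of its jQuery upgrade, which may be the warning those other posts were describing, though that does not by itself explain your JavaScript error.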
I am looking to dynamically update a Splunk dashboard panel title, depending on options I've chosen from a dropdown menu in the dashboard. I've tried the following:

    <row>
      <panel>
        <title>Panel1: (Hardware: $hardware1$ - $unit_used$</title>
        ...
        <search base="main_search">
          <query> ... | eval _unit_used = "ms" </query>
          <done>
            <set token="unit_used">$result._unit_used$</set>
          </done>
        </search>

However, when the dashboard runs, I am seeing "Hardware: abc - $result.unit_used$" instead of "Hardware: abc - ms". Is there a way of achieving this? Thanks
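A sketch of one likely fix, assuming the problem is that field names beginning with an underscore are hidden from search results, so $result._unit_used$ never resolves; renaming the field (and keeping it in the final result set) lets the token populate:

    <search base="main_search">
      <query> ... | eval unit_used = "ms" | table unit_used </query>
      <done>
        <set token="unit_used">$result.unit_used$</set>
      </done>
    </search>

$result.fieldname$ takes the value from the first result row, so the field has to survive to the final results for the <done> handler to see it.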
We are trying to do custom linebreaking for different types of logs under the same sourcetype, using the props below. The linebreaking in the first stanza, declared for the sourcetype, is working fine, but none of the [source::...] stanzas are breaking the events correctly; the entire file is getting ingested as a single event. All the files under this sourcetype come in from the same directory. We have tried assigning priorities and deploying the props on both the forwarder and the indexer, but it still doesn't work. Have any of you faced a similar issue before? Can you please help us resolve this?

    [MY_SRCTYPE]
    SHOULD_LINEMERGE=false
    LINE_BREAKER=(\~|\r\n)ST\*834\*
    NO_BINARY_CHECK=true
    TRUNCATE=999999
    CHARSET=UTF-8
    priority = 1

    [source::/mysource/ToSplunk/*.xml.*.edi]
    SHOULD_LINEMERGE=false
    LINE_BREAKER=([\r\n\s]+)\<Policy\>[\r\n\s]+
    NO_BINARY_CHECK=true
    TRUNCATE=999999
    CHARSET=UTF-8
    priority = 5

    [source::/mysource/ToSplunk/*.COMPARE.xml.*.edi]
    SHOULD_LINEMERGE=false
    LINE_BREAKER=([\r\n\s]+)\<CompareMissing\>[\r\n\s]+
    NO_BINARY_CHECK=true
    TRUNCATE=999999
    CHARSET=UTF-8
    priority = 6

    [source::/mysource/ToSplunk/*.xml.edi]
    SHOULD_LINEMERGE=false
    LINE_BREAKER=([\r\n\s])+\<Policy\s+
    NO_BINARY_CHECK=true
    TRUNCATE=999999
    CHARSET=UTF-8
    priority = 7

    [source::/mysource/ToSplunk/*.RCNO*.P.OUT.*]
    SHOULD_LINEMERGE=true
    LINE_BREAKER=([\r\n]+)
    NO_BINARY_CHECK=true
    TRUNCATE=999999
    CHARSET=UTF-8
    priority = 8
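Two hedged things worth checking. First, LINE_BREAKER only takes effect on the first full Splunk instance in the data path (a heavy forwarder or the indexer, never a universal forwarder), so the props must live there. Second, btool on that instance shows which stanzas were actually loaded and with what effective settings:

    splunk cmd btool props list --debug

If the [source::...] stanzas appear in the btool output but the sourcetype stanza's settings still win for these files, compare the priority values there, since explicit priority settings override the default stanza-matching precedence.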
Hi folks, what are the reasons for my output queues getting filled?

I have my HF on Azure cloud. It was working properly and sending data to Splunk Cloud. Suddenly, out of the blue, the HF stopped sending all logs (internal and configured ones) to Splunk Cloud. I noticed that the output queue was full and blocked data for a week; after that, it fixed itself on its own, and now there is no issue. There was no difference in event volume during the issue period.

My architecture: UFs -> HF on Azure -> Splunk Cloud.

Any thoughts?
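A hedged starting point for the post-mortem: queue blocking usually cascades backward from the tcpout queue, and metrics.log in the _internal index records fill levels per queue, so something like this (the host value is a placeholder) shows which queue filled first and when:

    index=_internal host=<your_hf> source=*metrics.log* group=queue
    | eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
    | timechart span=15m max(pct_full) by name

If the tcpout queue is the first to hit 100%, the bottleneck was downstream (the network path or the Splunk Cloud ingest endpoint) rather than on the HF itself.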
When I run the agent, it always shows me this message: "Started AppDynamics Java Agent Successfully." But in the logs/argentoDynamicService_07-06-2023-10.47.50 file, it shows the error below:

    ErrorOccurred==> Unable to complete registration on policy file argentoPolicy.json from the Management Server https://cas********.saas.appdynamics.com/argento-agent/v1/management, Error: java.lang.Exception: ERROR from Controller Response: Invalid tenant.

Note: this error is inconsistent; sometimes the agent connects to the controller with the same configuration.
After upgrading from 8.2.4 to 9.0.4.1, forwarders connect to the indexers and the indexer cluster stabilizes. All looked good: new data is delivered and indexed, and searching works fine. However, we started seeing WARN-level messages like the one below in splunkd.log:

    05-31-2023 06:47:21.407 -0500 WARN SystemInfo [15415 TcpChannelThread] - Invalid file path /proc/1/cgroup while checking container status

During the upgrade, no new apps were added, and no container is used for Splunk. These messages appear 3-4 times per minute, from different components, and on pretty much all Splunk entities, including the SH, deployer, LM, indexers, and CM. We would like an analysis of this.
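To scope how widespread and how frequent the warning is across the deployment, a hedged first search against the internal index:

    index=_internal sourcetype=splunkd log_level=WARN component=SystemInfo "Invalid file path /proc/1/cgroup"
    | timechart span=1h count by host

The message comes from the container-detection check that 9.x performs against /proc/1/cgroup; many reports treat it as cosmetic on non-containerized hosts, but that is an assumption to verify against the known-issues list for your exact 9.0.x release.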
Using Splunk Enterprise 9.0.4.1. Users are reporting that most page transitions between apps and dashboards take several seconds (10+). Even logging on with the Search app as my default app takes 30+ seconds. There does not appear to be any CPU/memory pinch on our single search head, as we monitor this with Telegraf. Looking into the MS Edge dev tools, I see a recurring pattern (screenshot omitted). It's not browser- or user-specific. It also happens from the server itself if we use a browser over an RDP session, so it is not network latency from the end-user browser to the search head either. Where could I start to look for clues as to why this is occurring?
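One hedged place to start is the UI access log in _internal, which records how long splunkd spent serving each web request, so you can see whether the time concentrates in specific endpoints:

    index=_internal sourcetype=splunkd_ui_access
    | stats avg(spent) as avg_ms perc95(spent) as p95_ms count by uri_path
    | sort - p95_ms | head 20

If a handful of endpoints dominate the p95, that narrows the problem to particular app assets or REST calls rather than general web-layer slowness.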
Hi, I would like to add a subcomponent bubble to the bubble chart in the Circlepack Viz app. For example, just as I can zoom into INFO and see all the bubbles it contains, I would like to do the same with the ROOT bubble and show another level of bubbles inside it. How can I achieve that? @chrisyounger
Hello, I have a simple .bat file that just performs a "dir" command to list everything in a folder. I have set inputs.conf to the following:

    [script://.\bin\ListDir.bat]
    disabled = 0
    ## Run every ten seconds
    interval = 10
    sourcetype = Script:ListDir
    index = main

I have placed ListDir.bat in the application's \bin\ folder. When I run the command "splunk cmd ListDir.bat", Splunk runs the bat file with no problem, displaying all the files I want Splunk to ingest. However, looking at my splunkd logs, I see that ExecProcessor lists the exact location where the bat file exists and reads "The system cannot find the file specified.":

    ERROR ExecProcessor [11384 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_Burn_Folder\bin\ListDir.bat"" The system cannot find the file specified.

I looked at the file permissions and gave everyone access to this bat file. Any help is appreciated.
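Note that ExecProcessor's "message from ..." lines relay the script's own output, so "The system cannot find the file specified" may come from inside the batch file (for example a relative path that resolves differently under splunkd's working directory) rather than from Splunk failing to find ListDir.bat itself. A minimal sketch of a more defensive version, with a hypothetical target folder:

    @echo off
    REM Use absolute paths: the working directory when splunkd runs the
    REM script is not guaranteed to be the script's own folder (assumption to verify).
    dir "C:\Data\FolderToList"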
Why doesn't this search populate the multiselect? (screenshot omitted)
I am trying to install the "Splunk Add-on for New Relic" on a Splunk Cloud instance, but I am not able to find any option to install it. Can anyone share the steps, or a document if one exists, that can help? Alternatively, is there another way to integrate New Relic data into Splunk?
Hey! I have a dashboard that is updated every day by a report that runs at 12:30 UTC. All the visualizations are displayed using the search time range "Today". However, each day before 12:30 UTC, since the report has not run yet, all the visualizations display "No results found". Is there a way to make the dashboard display the information from the previous day if the report has not run yet? I tried expanding the search time range to the past two days; however, most of the visualizations are based on displaying the number of machines of "today" by a specific type, so expanding the time range ends up summing yesterday's count with today's after 12:30 UTC.

    | chart count as "# of Machines" by Classification MachineClass
    | addtotals fieldname="Total (s)"

Does anyone have an idea of how to do this? Thanks!
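One hedged approach, assuming each report run writes one event per machine and there is a machine identifier field (hypothetical name "machine"): search the last two days but keep only the most recent event per machine, so before 12:30 UTC the dashboard automatically falls back to yesterday's run without double counting:

    <your base search> earliest=-1d@d latest=now
    | dedup machine
    | chart count as "# of Machines" by Classification MachineClass
    | addtotals fieldname="Total (s)"

dedup keeps the first (most recent) event per machine, so once today's run lands, yesterday's rows for the same machines drop out.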
I have Splunk on v9.0.1 and ES on v7.0.1. The issue I am facing with the notable alerts is that some of the alerts have a trigger time different from the notable time, and the difference is vast in some cases. Note that I have already checked for timezone issues; there are none, and all endpoints are in the same timezone. This is messing up my SLA, and a notable also doesn't show up in the Incident Review tab until a few hours later. For example, an alert triggered at 3 PM won't show up until some later time and will have a different notable time. Attaching snapshots for reference (not shown); notice the time difference.
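To quantify the lag per correlation search, a hedged sketch comparing each notable's event time with the time it was actually written to the notable index:

    index=notable
    | eval write_delay_min = round((_indextime - _time) / 60, 1)
    | stats max(write_delay_min) as max_delay avg(write_delay_min) as avg_delay count by search_name
    | sort - max_delay

If the delay clusters on specific correlation searches, the usual suspects are those searches' dispatch windows and schedules, or the scheduler skipping runs under load, rather than a timezone problem.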
Hi all, we have some data coming from Splunk DB Connect, and one field has raw data as below. How do we convert the JSON payload data into a readable format? (I have attached a picture of how it should be converted, and the JSON data is below.) The field we want to perform JSON operations on is report_json. We tried the search below, but it is not working. Also, is there anything we need to update on the DB query end to get this output?

    index="test1"
    | search NOT errors="*warning Puppet*" NOT errors="*Permission*" report_json=*
    | eval json_string=json(report_json), test=report_json
    | table json_string, test, len(test)
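A hedged sketch of the usual way to expand a JSON string stored in a field, assuming report_json contains valid JSON (spath extracts its keys into separate fields):

    index="test1" NOT errors="*warning Puppet*" NOT errors="*Permission*" report_json=*
    | spath input=report_json
    | table *

If spath extracts nothing, the payload is probably not valid JSON as stored (for example, truncated by the DB column width or double-escaped), which would point back at the DB Connect query or column definition rather than the SPL.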
Hi All,

We are getting the error "unable_to_write_batch java.net.SocketTimeoutException: Read timed out" in Splunk DB Connect:

    [Scheduled-Job-Executor-0] ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch
    java.net.SocketTimeoutException: Read timed out
        at java.base/java.net.SocketInputStream.socketRead0(Native Method)
        at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115)
        at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168)
        at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
        at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:478)
        at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:472)
        at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
        at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1454)
        at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1065)
        at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
        at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
        at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
        at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
        at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
        at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
        at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
        at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
        at com.codahale.metrics.httpclient.InstrumentedHttpRequestExecutor.execute(InstrumentedHtt

Please suggest.
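For context (an assumption worth verifying against your DB Connect version's docs): DB Connect v3 inputs write events into Splunk through the HTTP Event Collector, so a read timeout in this writer often means the receiving HEC endpoint was slow to respond. A hedged check of the receiving side in the internal logs:

    index=_internal sourcetype=splunkd component=HttpInputDataHandler
    | timechart span=15m count by log_level

If HEC-side errors or queue blocking line up with the unable_to_write_batch timestamps, the bottleneck is ingestion on the Splunk side rather than the database connection.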
Hello, I have this dashboard with these 3 fields (ID, A1_Links, A2_Links). The goal is to get a count of the total number of IDs containing links, based on the A1_Links and A2_Links columns (how many IDs contain A1 links and how many IDs contain A2 links). How can I do that?

    index="" host= sourcetype=csv source=CW27.csv
    | dedup ID
    | table ID A1_Links A2_Links
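A hedged sketch, assuming an ID "contains links" whenever the corresponding field is non-empty (base search left redacted as in the question):

    index="" host= sourcetype=csv source=CW27.csv
    | dedup ID
    | stats count(eval(A1_Links!="")) as "IDs with A1 links" count(eval(A2_Links!="")) as "IDs with A2 links"

count(eval(...)) only counts rows where the expression evaluates to true, so rows with a null or empty link field are skipped for that column.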