Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

@MattKr What is your log retention period? Also, you can look at the _internal logs by sourcetype to get the required data, but these internal logs are kept for only 30 days by default.
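For example, a rough sketch of a volume-by-sourcetype search over the internal metrics logs (assuming daily ingest volume is what you are after; the series field holds the sourcetype name, and the span is just an illustration):

index=_internal source=*metrics.log* group=per_sourcetype_thruput
| timechart span=1d sum(kb) AS KB by series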
Hello, I have the following data. I want to return tabled data if the events happened within 100ms and they match on the same hashCode and the same thirdPartyId. So essentially the search has to be sorted by each combination of thirdPartyId and hashCode, and then compare events line by line to see whether the previous line and the current one happened within 100ms. What should the query look like?

| makeresults format=csv data="startTS,thirdPartyId,hashCode,accountNumber
2024-04-16 21:53:02.455-04:00,AAAAAAAA,00000001,11111111
2024-04-16 21:53:02.550-04:00,AAAAAAAA,00000001,11112222
2024-04-16 21:53:02.650-04:00,BBBBBBBB,00001230,22222222
2024-04-16 21:53:02.650-04:00,CCCCCCCC,00000002,12121212
2024-04-16 21:53:02.730-04:00,DDDDDDDD,00000005,33333333
2024-04-16 21:53:02.830-04:00,DDDDDDDD,00000005,33334444
2024-04-16 21:53:02.670-04:00,BBBBBBBB,00000002,12121212
2024-04-16 21:53:02.700-04:00,CCCCCCCC,00000002,21212121"
| sort 0 startTS, thirdPartyId
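A sketch of one way to do the comparison, appended after the makeresults above (the strptime format string is an assumption based on the sample timestamps; verify it against your real data):

| eval _time = strptime(startTS, "%Y-%m-%d %H:%M:%S.%3N%:z")
| sort 0 thirdPartyId hashCode _time
| streamstats current=f window=1 last(_time) AS prevTime by thirdPartyId hashCode
| eval deltaMs = round((_time - prevTime) * 1000)
| where deltaMs <= 100
| table startTS thirdPartyId hashCode accountNumber deltaMs

streamstats with current=f carries forward the previous event's _time within each thirdPartyId/hashCode group, so deltaMs is the gap to the immediately preceding event in that group.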
This is an odd one happening on each of our indexers. The same behavior happens quite frequently, where we will get exactly 11 of these Remote token requests from splunk-system-user, and exactly 1 of them will fail. Here is how it looks in the audit logs.

04-22-2024 21:30:31.964 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:31.964, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:31.986 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:31.986, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:32.384 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:32.384, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:32.395 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:32.395, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:40.687 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:40.687, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:40.694 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:40.694, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:46.803 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:46.803, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:46.815 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:46.815, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:47.526 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:47.526, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:47.542 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:47.542, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:55.317 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:55.317, user=splunk-system-user, action=Remote token requested, info=failed]

My problem is that I can't do much more with this information. I have no notion of where these requests are coming from, since no other information is included here. Is there anything else I can investigate? The number 11 doesn't seem to line up with anything I can think of either; there are 3 search heads, 3 indexers, and 1 cluster manager in this particular deployment, so I'm not sure where the 11 requests are coming from.
Would it work to replace the Universal Forwarder with a new one?
We set up two cluster managers behind a load balancer, according to this document. According to the document, the active manager should respond with 200 to the health-check probe while the standby managers respond with 503. However, in our case, both respond with 200. In addition, when running the following command on each cluster manager, instead of listing all cluster managers, only the current node is output. What could be the problem in the setup?

splunk cluster-manager-redundancy -show-status
Is there any script or playbook that can help with adding executable permissions for particular scripts in the *nix app?
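For what it's worth, a minimal shell sketch (the app path and script names are assumptions for illustration; substitute the scripts you actually enable):

# make selected scripts in the *nix app's bin directory executable
cd /opt/splunk/etc/apps/Splunk_TA_nix/bin
chmod +x cpu.sh vmstat.sh iostat.sh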
Hi, the size of my Splunk database is around 1 TB. I would like to know about all available indexes and especially all of the associated sourcetypes and the amount of data in each. The search in the Web UI works no problem for the last 24 hrs, but searching across all of the data takes forever and times out. I'm aware that saved searches would be an option, but I'm curious whether a script would work that recursively scans the database and processes each SourceTypes.data file, like:

< /opt/splunk/var/lib/splunk/sampledb/db/db_1680195600_1672423200_0/SourceTypes.data
< /opt/splunk/var/lib/splunk/sampledb/db/db_1698782400_1680199200_1/SourceTypes.data
...

Would this be a feasible option? Many thanks
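As a possible alternative to scanning the .data files on disk: tstats and metadata read index-level summaries rather than raw events, so they stay fast even across all time. A sketch:

| tstats count where index=* by index, sourcetype

or, for first/last event times per sourcetype as well (run per index):

| metadata type=sourcetypes index=*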
Is there a way just to exclude specific sources from the transforms null-route?
@deepakc Sorry, I missed mentioning it. My monitor is:

[monitor://D:\Logs\*]
sourcetype = abc
index = def

and the transforms is set to:

REGEX = (Info|info|Information|debug|Debug|Verbose)
DEST_KEY = queue
FORMAT = nullQueue

My D:\Logs\jkl.txt has all info logs and therefore is not ingested currently because of the transform. Now I want to ingest this file, but removing the transform would ingest info logs from the other sources as well, which I don't want. How can I proceed?
The Splunk Visual Exporter notifications generated by the Upgrade Readiness App are false positives, meaning it is safe to dismiss the app from the scanning process. I confirmed this with our Splunk Support team; apologies for the inconvenience.
I realize this is an old thread, but in case anyone is running into this, this is how I solve it: do a running read of splunkd.log while searching for "while reading":

tail -f /opt/splunk/var/log/splunk/splunkd.log | grep -i "while reading"

Stop Splunk and keep looking at the output of the tail command. Whichever file Splunk was reading while it was shut down is your trouble file.
Unfortunately not. The configure_db_maintenance settings do not have that level of granularity; they apply to all containers.
You could try the following, at your own risk! In any case, your SHC is not fulfilling Splunk's requirements. Have you tried stopping all nodes in the SHC? Back up the KV store. Then remove that app from this one node with rm -fr …./etc/apps/<your app name>. Then start all the nodes one by one on the SHC and check what your situation is after that. Also check the KV store and SHC statuses. That may help you, or it could lead to a situation that forces you to install the whole SHC from scratch! So test this at your own risk.
Have a look at this one - maybe this would be of help; it mentions Azure App Insights. (It's always worth perusing Splunkbase and working through the Azure TAs against your requirements.) The help file is within the TA, so you would need to look at that for further guidance: https://splunkbase.splunk.com/app/7246
This is an example using the makeresults command - you can use the rex command to extract key-value pairs from the content.payload field. This is an example only, to show you how to extract some of the fields; I have called my field data, so replace this with yours.

| makeresults
| eval data = "fileName=ExchangeRates.csv, periodName=202403, status=SUCCESS, subject=, businessEventMessage=RequestID: 101524, GL Monthly Rates - Validate and upload program"
| rex field=data "fileName=(?<fileName>\w+\.\w+),\speriodName=(?<periodName>\w+),\sstatus=(?<status>\w+)"
| table *

Or look at the spath command, which may be another way.
There is no out-of-the-box query for that. You have to combine a query that returns all saved searches with a query that pulls run times from the logs. These searches should get you started:

| rest /servicesNS/-/-/saved/searches

index=_internal source=*scheduler.log component=SavedSplunker status=success
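A sketch of one way to stitch the two together (assuming you want the last successful run per search; savedsearch_name is what scheduler.log carries, while the REST endpoint returns the name in title):

| rest /servicesNS/-/-/saved/searches
| fields title
| join type=left title
    [ search index=_internal source=*scheduler.log component=SavedSplunker status=success
    | stats max(_time) AS last_run by savedsearch_name
    | rename savedsearch_name AS title ]
| eval last_run = strftime(last_run, "%F %T")
| table title last_run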
I tried the code below, but it is not working. Can anyone let me know what is wrong here?

<form version="1.1" theme="light">
  <label>HTMD Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="host">
      <label>Env wise hosts</label>
      <choice value="(host=appdev1host OR host=logdev1host OR host=cordev1host)">DEV1</choice>
      <choice value="(host=appdev2host OR host=logdev2host OR host=cordev2host)">DEV2</choice>
      <choice value="(host=appdev3host OR host=logdev3host OR host=cordev3host)">DEV3</choice>
      <choice value="(host=appdev4host OR host=logdev4host OR host=cordev4host)">DEV4</choice>
      <choice value="(host=appsit1host OR host=logsit1host OR host=corsit1host)">SIT1</choice>
      <choice value="(host=appsit2host OR host=logsit2host OR host=corsit2host)">SIT2</choice>
      <choice value="(host=appsit3host OR host=logsit3host OR host=corsit3host)">SIT3</choice>
      <choice value="(host=appsit4host OR host=logsit4host OR host=corsit4host)">SIT4</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Incoming Count &amp; Total Count</title>
        <search>
          <query>index=test-index source=application.logs $host$ "Incoming count"
| stats count AS "Incoming count"
| appendcols
    [ search index=test-index source=application.logs $host$ "Total count"
    | stats count AS "Total count" ]
| table "Incoming count" "Total count"</query>
          <earliest>$timepicker.earliest$</earliest>
          <latest>$timepicker.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
I don't think you can. The null routing should come first in your props and transforms (left-to-right order), otherwise all the data will get discarded, so look at the order in your props; I'm sure the null route is first in the order, which is what catches the jkl.txt logs.

What you want to do now is explicitly add jkl.txt for ingestion, so the method would be to whitelist only the files you want to be logged, as in the example below:

[monitor://D:\Logs\*]
sourcetype = abc
index = def
whitelist = (jkl\.txt|myother_files\.txt)$

So I think you may have to modify the null routing or disable it.
Can you dynamically change the chart type (i.e. from bar to line) using a dropdown menu? At the moment, I've created multiple charts and am utilizing show and hide (depending on the option selected) to serve this purpose. I was wondering if there's an easier/cleaner/simpler way of achieving this.
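One possible alternative (a sketch; the token name chart_tok and the sample search are placeholders): in Simple XML the charting.chart option can take a token, so a single dropdown can drive the chart type of one panel directly:

<input type="dropdown" token="chart_tok">
  <label>Chart type</label>
  <choice value="line">Line</choice>
  <choice value="column">Bar</choice>
  <default>line</default>
</input>
...
<chart>
  <search>
    <query>index=_internal | timechart count</query>
    <earliest>-60m@m</earliest>
    <latest>now</latest>
  </search>
  <option name="charting.chart">$chart_tok$</option>
</chart>

That would replace the multiple hidden panels with a single chart.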
Hi All, I have a field called content.payload and the value looks like the following. How do I extract these values?

{fileName=ExchangeRates.csv, periodName=202403, status=SUCCESS, subject=, businessEventMessage=RequestID: 101524, GL Monthly Rates - Validate and upload program}