All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


As @ITWhisperer said, you only have access to the first result of the table in the <done> clause, but assuming you only have a single result, you can set the token based on that very simply using <eval>:

<done>
  <eval token="tok_runtime">if($result.has_runtime$="Yes", "true", null())</eval>
</done>

If you have multiple results, then this would work:

<search>
  <query>
    index="abc" sourcetype="abc" Info.Title="$Title$"
    | spath output=Runtime_data path=Info.runtime_data
    | eval has_runtime = if(isnotnull(Runtime_data), 1, 0)
    | table _time, has_runtime
    | eventstats max(has_runtime) as has_runtime
  </query>
  <done>
    <eval token="tok_runtime">if($result.has_runtime$>0, "true", null())</eval>
  </done>
</search>
Is there a good step-by-step, practical, hands-on, how-to guide, starting at the first step and ending at successful completion, to do this: ingest AWS CloudWatch logs into Splunk Enterprise running on an EC2 instance in that particular AWS environment? I've read a lot of documents, tried different things, and followed a couple of videos, and I'm able to see CloudWatch configuration entries in my main index, but so far I have not gotten any CloudWatch logs. I am not interested in deep architectural understanding. I just want to start from the very beginning at the true step one and end at the last step with logs showing up in my main index. Also, the community "ask a question" page requires an "associated App", and I picked one from the available list, but I don't care which app works; I just want to use the one that works. Thank you very much in advance.
1. The transfer time is governed by two factors: the speed of the network and the maxKBps setting in limits.conf. The latter defaults to approximately 256KBps, but setting it to zero disables the limit and makes the network the limiting factor.
2. The EPS rate is the data transmission rate divided by the size of the events. Both of those numbers are unknown in this thread, so EPS cannot be calculated.
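If you want to remove the cap entirely, a minimal sketch of the relevant stanza, assuming you are editing limits.conf on the forwarder:

[thruput]
# 0 disables the thruput ceiling; the network becomes the limiting factor
maxKBps = 0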
We have a table where we see no data for a few columns. We tried fillnull value=0 but it's not working. This only happens when there is no count for the complete column. For example, for invalidcount we have data for Login but no data for the other applications, so zero values were filled in automatically; but for rejectedcount, trmpcount, and topiccount there is no data for any application, and the 0 value is not getting filled in.

Application   incomingcount  rejectedcount  invalidcount  topcount  trmpcount  topiccount
Login         1                             2             5
Success       8                             0             2
Error         0                             0             10
logout        2                             0             4
Debug         0                             0             22
error-state   0                             0             45
normal-state  0                             0             24
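For what it's worth, fillnull without a field list only fills fields that exist in at least one result, so a column that is empty everywhere never gets created. A minimal sketch of the usual workaround, naming the fields explicitly (field names taken from the table above):

| fillnull value=0 incomingcount rejectedcount invalidcount topcount trmpcount topiccount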
Hello @AaronJaques @titchynz, I have posted a solution that should resolve the error you have mentioned. Please check the following link: Stack Overflow - Splunk DB Connect and Snowflake Integration Error
Hi @MattKr, Here's an option that will run from the UI.

| rest /services/data/indexes splunk_server=local
| stats count by title
| rename title as index
| map [| metadata type=sourcetypes index=$index$ | eval index="$index$"] maxsearches=100

In the first line, make sure splunk_server=<NAME OF INDEXER>; for Splunk Cloud, local is fine. Make maxsearches=XXX match the total number of indexes you have. This uses the metadata command to get the sourcetypes, the earliest/latest times, and the number of matching events. The one drawback is that the index isn't included in the results, so I've set it up via the map command so it will run the metadata search for each index. A couple of things to note:

This will run as many searches as you have indexes - so be careful.
The metadata search is lightning fast as it only runs on the index metadata (hence the name), so there's no real data being brought back - just data about the index.
You need to run it as an all-time search to get all of your data... Pick a time to do this to reduce any impact.

I ran the search on a small cloud environment with 52 indexes over all time and it completed in 4.9s. Give that a go.
| eval startTS=strptime(startTS, "%F %T.%3N%z")
| sort startTS, thirdPartyId
| fieldformat startTS=strftime(startTS, "%F %T.%3N")
| streamstats range(startTS) as difference window=2 global=f by hashCode thirdPartyId
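To keep only events that landed within 100 ms of the previous event in the same group, a filter along these lines should work (0.1 seconds, since strptime yields epoch seconds; note that streamstats gives the first event of each group a difference of 0, so extra logic would be needed if you also want to keep the first row of each matching pair):

| where difference > 0 AND difference <= 0.1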
Also looking for this
@svukov please have a look at the transaction command (after a sort on time, if needed): https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Transaction

| transaction thirdPartyId,hashCode maxspan=?
@MattKr What is your retention period for logs? You can also look at _internal logs by sourcetype to get the required data, but these internal logs are stored only for 30 days by default.
Hello, I have the following data. I want to return tabled data if the events happened within 100ms and they match by the same hashCode and the same thirdPartyId. So essentially the search has to be sorted by each combination of thirdPartyId and hashCode, and then compare events line by line to see whether the previous line and the current one happened within 100ms. What should the query look like?

| makeresults format=csv data="startTS,thirdPartyId,hashCode,accountNumber
2024-04-16 21:53:02.455-04:00,AAAAAAAA,00000001,11111111
2024-04-16 21:53:02.550-04:00,AAAAAAAA,00000001,11112222
2024-04-16 21:53:02.650-04:00,BBBBBBBB,00001230,22222222
2024-04-16 21:53:02.650-04:00,CCCCCCCC,00000002,12121212
2024-04-16 21:53:02.730-04:00,DDDDDDDD,00000005,33333333
2024-04-16 21:53:02.830-04:00,DDDDDDDD,00000005,33334444
2024-04-16 21:53:02.670-04:00,BBBBBBBB,00000002,12121212
2024-04-16 21:53:02.700-04:00,CCCCCCCC,00000002,21212121"
| sort startTS, thirdPartyId
This is an odd one happening on each of our indexers. The same behavior happens quite frequently, where we will get exactly 11 of these Remote token requests from splunk-system-user, and exactly 1 of them will fail. Here is how it looks in the audit logs.

04-22-2024 21:30:31.964 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:31.964, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:31.986 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:31.986, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:32.384 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:32.384, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:32.395 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:32.395, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:40.687 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:40.687, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:40.694 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:40.694, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:46.803 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:46.803, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:46.815 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:46.815, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:47.526 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:47.526, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:47.542 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:47.542, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:55.317 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:55.317, user=splunk-system-user, action=Remote token requested, info=failed]

My problem is I can't do much more with this information. I have no notion of where these requests are coming from, since no other information is included here. Is there anything else I can investigate? The number 11 doesn't seem to line up with anything I can think of either; there are 3 search heads, 3 indexers, and 1 cluster manager in this particular deployment. Not sure where the 11 requests are coming from.
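One avenue worth trying, sketched here as a suggestion rather than a known fix: pull the failed audit events and look at what else splunkd logged on the same host in the seconds around each failure. The 5-second window and maxsearches value are arbitrary assumptions.

index=_audit action="Remote token requested" info=failed
| eval e=_time-5, l=_time+5
| map maxsearches=10 search="search index=_internal sourcetype=splunkd host=$host$ log_level=ERROR earliest=$e$ latest=$l$"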
Would it work to replace the Universal Forwarder with a new one?
We set up two cluster managers with a load balancer, according to this document. According to the document, the active manager should respond with 200 to the health check probe while the standby managers respond with 503. However, in our case, both respond with 200. In addition, when running the following command on each cluster manager, instead of listing all cluster managers, only the current node is output. What could be the problem in the setup?

splunk cluster-manager-redundancy -show-status
Is there any script or playbook that can help in adding executable permissions for particular scripts for the nix app?
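A minimal shell sketch, assuming the app in question is the Splunk Add-on for Unix and Linux installed under the default path (the Splunk_TA_nix directory name and the $SPLUNK_HOME location are assumptions; adjust for your environment):

#!/bin/sh
# Mark every shell script shipped in the add-on's bin directory as executable.
chmod +x "$SPLUNK_HOME"/etc/apps/Splunk_TA_nix/bin/*.sh

The same line drops into an Ansible playbook as a shell or file task if you manage your forwarders that way.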
Hi, the size of my Splunk database is around 1TB+. I would like to know about all available indexes and especially all of the associated sourcetypes and the amount of data in each. The search in the WebUI works no problem for the last 24hrs, but searching over all of the data takes forever and times out. I'm aware that saved searches would be an option, but I'm curious to know whether a script would work which recursively scans the database and processes all SourceTypes.data files, like:

/opt/splunk/var/lib/splunk/sampledb/db/db_1680195600_1672423200_0/SourceTypes.data
/opt/splunk/var/lib/splunk/sampledb/db/db_1698782400_1680199200_1/SourceTypes.data
...

Would this be a feasible option? Many thanks
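Before scripting against the .data files, it may be worth trying tstats, which reads only index-time metadata and is usually fast even over all time; a minimal sketch:

| tstats count where index=* by index, sourcetype

This returns the event count per index/sourcetype pair without touching raw data, which sounds like the numbers you're after.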
Is there a way just to exclude specific sources from the transforms null-route?
@deepakc Sorry, I forgot to mention, my monitor is:

[monitor://D:\Logs\*]
sourcetype = abc
index = def

and the transforms is set to:

REGEX = (Info|info|Information|debug|Debug|Verbose)
DEST_KEY = queue
FORMAT = nullQueue

My D:\Logs\jkl.txt has all info logs and therefore does not get ingested currently because of the transform. But now I want to ingest this file, and removing the transform would ingest info logs from other sources as well, which I don't want. How can I proceed?
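One pattern that should cover this, sketched as an assumption based on your stanzas (the setnull and keep_jkl stanza names are hypothetical): add a second transform keyed on the source that runs after the null-route and puts events from that file back on the index queue. Because both transforms write the queue key, the later one in the list wins for matching events.

transforms.conf:

[setnull]
REGEX = (Info|info|Information|debug|Debug|Verbose)
DEST_KEY = queue
FORMAT = nullQueue

[keep_jkl]
SOURCE_KEY = MetaData:Source
REGEX = jkl\.txt$
DEST_KEY = queue
FORMAT = indexQueue

props.conf:

[abc]
TRANSFORMS-route = setnull, keep_jkl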
The Splunk Visual Exporter notifications generated by the Upgrade Readiness App are false positives, meaning it is safe to dismiss the app from the scanning process. I confirmed this with our Splunk Support team; apologies for the inconvenience.
I realize this is an old thread, but in case anyone is running into this, this is how I solve it: Do a running read of splunkd.log while searching for "while reading":

tail -f /opt/splunk/var/log/splunk/splunkd.log | grep -i "while reading"

Stop Splunk and keep looking at the output of the tail command. Whichever file Splunk was reading while it was shut down is your trouble file.