All Posts

Thank you Rick, exactly what I was looking for. Can I give you another scenario? Just guide me, please. I have a field in the same index that I don't have to show in the table, but I have to use a case statement to sum or count the number of transactions. The field status_code will have values like 200, 201, 300, 302, 400, 401, 500, 502. I only need the count of events for all 200s, all 400s, and all 500s (I don't need the one for 300). I'm trying to get this into a case statement.
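A minimal sketch of that case approach, assuming the field is named status_code as described; tonumber() guards against the field having been extracted as a string:

| eval status_class=case(tonumber(status_code)>=200 AND tonumber(status_code)<300, "2xx",
    tonumber(status_code)>=400 AND tonumber(status_code)<500, "4xx",
    tonumber(status_code)>=500 AND tonumber(status_code)<600, "5xx")
| stats count BY status_class

Events in the 3xx range match no branch of case(), get a null status_class, and drop out of the BY grouping, so only the 2xx, 4xx, and 5xx counts remain.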
You can use the strptime and strftime functions to do that:

| eval date=strftime(strptime(<<someField>>, "%a %d %b %Y %H:%M:%S:%3N %Z"), "%m/%d/%Y")

where <<someField>> is the name of the field containing the date value shown.
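As a quick check of the format string, you can run the conversion against the sample value from the question with makeresults:

| makeresults
| eval someField="Tue 27 May 2025 15:26:23:702 EDT"
| eval date=strftime(strptime(someField, "%a %d %b %Y %H:%M:%S:%3N %Z"), "%m/%d/%Y")

This should return date=05/27/2025.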
Hi, I have a scenario where I am getting data from one of the indexes with two other filters, like index=index_logs_App989 customer="*ABC*" org IN ("Provider1","Provider2"). I have one field with date values like Tue 27 May 2025 15:26:23:702 EDT. From this I have to take out the time part and convert it into a date like 05/27/2025, so that I can use it to aggregate at the date or day level only. Any guidance please?
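A sketch of the whole pipeline, assuming the date field is called event_date (the actual field name isn't given in the question):

index=index_logs_App989 customer="*ABC*" org IN ("Provider1","Provider2")
| eval day=strftime(strptime(event_date, "%a %d %b %Y %H:%M:%S:%3N %Z"), "%m/%d/%Y")
| stats count BY day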
Hello, colleagues. After upgrading to Splunk Stream 8.1.5, it stopped parsing bytes_in, bytes_out, packets_in, and packets_out; they are always equal to zero:

{
  app_tag: PANA-L7-PEN : xxxxxxxxxxxxx
  bytes_in: 0
  bytes_out: 0
  dest_ip: x.x.x.x
  dest_port: xxx
  endtime: 2025-05-28T15:01:26Z
  event_name: netFlowData
  exporter_ip: x.x.x.x
  exporter_time: 2025-May-28 15:01:26
  exporter_uptime: 3148584010
  flow_end_reason: 3
  flow_end_rel: 0
  flow_start_rel: 0
  fwd_status: xx
  input_snmpidx: xx
  netflow_elements: [ ... ]
  netflow_version: 9
  observation_domain_id: 1
  output_snmpidx: xxx
  packets_in: 0
  packets_out: 0
  protoid: 6
  selector_id: 0
  seqnumber: 2278842767
  src_ip: x.x.x.x
  src_port: 9997
  timestamp: 2025-05-28T15:01:26Z
  tos: 0
}

I am using an independent stream forwarder, with streamfwd installed as a service on Linux Ubuntu 22.04.5. If I stop the service, replace the streamfwd file with the old 8.1.3 version, and start the service again, everything is OK. Has anybody run into this? Thanks!
I’m not sure if this helps, but changes made on the search head cluster captain do appear in the bundle under /opt/splunk/var/run.
It seems that you are not using {0} in your query input. Also, can you post the sanitized code for the code block and the full entry for the data path of the 0 input?
As the error message describes, you are trying to delete a playbook from a read-only repository. If you are importing it directly from the Splunk security content github repo, then you cannot delete the playbook and would be better off removing the repo in your Source Control settings. If it is cloned to a repo you control, then you need to uncheck the "read only" setting for that repo.
Hello @gcusello , If you mean with multiple values from Lookup, I didnt tried. I would like to check the time from lookup with index timestamp events of deviation +0.5Sec or -0.5Sec from the ti... See more...
Hello @gcusello, if you mean with multiple values from the lookup, I haven't tried that. I would like to compare the time from the lookup against the index event timestamps, allowing a deviation of +0.5 sec or -0.5 sec from the time in the index, and I need to show the result. Please let me know if there is any other way to do it. Thanks!
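One possible pattern, assuming the events and the lookup share a key field (id here is hypothetical) and that the lookup time is a string field called expected_time in a known format:

index=your_index
| lookup time_lookup id OUTPUT expected_time
| eval expected_epoch=strptime(expected_time, "%Y-%m-%d %H:%M:%S.%3N")
| eval delta=abs(_time - expected_epoch)
| where delta<=0.5
| table _time expected_time delta

The abs() on the difference covers both the +0.5 sec and -0.5 sec sides of the window.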
Hi @smanojkumar , without the static value, does it run? Ciao. Giuseppe
Hi @thanh_on, you can find this by viewing the license consumption for each day; that's the total daily indexing volume across all the indexers in the cluster. Ciao. Giuseppe
We had the same issue after an update, but the solution resolved it. Thank you!
Hello @gcusello, I was just testing with a single value; obviously it will be dynamic. Thanks again!
Hi @thanh_on

The "Daily Data Volume" in this case is the amount of daily ingest. You can get this by going to https://yourSplunkInstance/en-US/manager/system/licensing or by running the following search:

index=_internal [ rest splunk_server=local /services/server/info | return host] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
| eval _time=_time - 43200
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "volume" fixedrange=false
| fields - _timediff
| foreach "*" [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Okay @mchoudhary - this might look a little bizarre, but stay with me. You could use the following table output; it uses a subsearch to determine the months returned based on the earliest/latest set by the time picker and lists them out as per the screenshot below. Would this work for you?

| table Source [| makeresults count=12
    | streamstats count as month_offset
    | addinfo
    | eval start_epoch=info_min_time, end_epoch=info_max_time
    | eval start_month=strftime(start_epoch, "%Y-%m-01")
    | eval month_epoch = relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
    | where month_epoch <= end_epoch
    | eval month=strftime(month_epoch, "%b")
    | stats list(month) as search ]

And in the context of the full search:

| tstats count where index=main by _time span=1d
| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| eval Source="Email"
| eval Blocked=count
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| xyseries Source MonthName Blocked
| addinfo
| table Source [| makeresults count=60
    | streamstats count as month_offset
    | addinfo
    | eval start_epoch=info_min_time, end_epoch=info_max_time
    | eval start_month=strftime(start_epoch, "%Y-%m-01")
    | eval month_epoch = relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
    | where month_epoch <= end_epoch
    | eval month=strftime(month_epoch, "%b")
    | stats list(month) as search ]

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
I found that only a couple of lookups were referenced in local files. I also reviewed all allowlist and denylist settings in distsearch.conf, and everything appears to be in order. Additionally, I compared the server.conf files in both environments using btool. There were no significant differences aside from some expected variations in names and IDs. The only notable difference, though it doesn't seem to be relevant, is that captain_is_adhoc_searchhead is set to true in production.
Dear everyone, I have a Splunk cluster (2 indexers) with Replication Factor=2 and Search Factor=2. I am supposed to size an index A in indexes.conf. Then I found this useful website: https://splunk-sizing.soclib.net/ My concern with this website is how to calculate "Daily Data Volume" (average uncompressed raw data). So, how can I calculate this? Can I use an SPL command on the Search Head to calculate it? Thanks & best regards.
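A minimal sketch using the license usage log, assuming the license manager's _internal logs are searchable from your search head and the default retention covers the window you care about:

index=_internal source=*license_usage.log* type=Usage earliest=-30d@d latest=@d
| bin _time span=1d
| stats sum(b) AS bytes by _time
| eval daily_GB=round(bytes/1024/1024/1024, 2)
| stats avg(daily_GB) AS avg_daily_GB

type=Usage records license (uncompressed raw) bytes indexed, so summing the b field per day gives the daily uncompressed volume the sizing calculator asks for.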
Hi @smanojkumar , let me understand: why do you fix _time to a fixed value? Ciao. Giuseppe
@PrewinThomas , I have reviewed the app configuration and logs but could not find any errors related to the issue. The application includes a passwords.conf file, which is replicated immediately across the SHC nodes. I wasn't able to find any exclusions regarding the custom file. 
Hello @gcusello & @PickleRick, thanks for your time! I have converted both the index time and the lookup time to epoch. It works perfectly with makeresults, but when I use index data the timestamps change to another new value in the search. I have attached a snapshot of makeresults where it's working fine, and the other snapshots as well. Please let me know if I missed anything. Thanks!
Thanks, I'll follow up with them and see if they can help. Should this app be de-listed from Splunkbase, or can we get someone to check with the devs and see if they might publish an updated version at a later stage? I'm not sure if someone from Splunk regularly checks the apps to see if they're still available...