All Posts

Hi Team, is there any way we can calculate the time duration between two different events, such as a start and an end? For example: we have a start event at 10/10/23 23:50:00.031 PM and an end event at 11/10/23 00:50:00.031 AM. How can we calculate this? Please help. Thank you.
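One common approach (a sketch only; the index name `myindex`, the correlation field `id`, and the `event_type` markers are assumptions, not from the question) is to group the start and end events together and take the range of their timestamps:

```
index=myindex (event_type="start" OR event_type="end")
| stats earliest(_time) as start_time latest(_time) as end_time by id
| eval duration_seconds = end_time - start_time
| eval duration = tostring(duration_seconds, "duration")
```

`tostring(..., "duration")` renders the difference in a readable HH:MM:SS form; for the example timestamps above the result would be roughly one hour.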
Thank you for your help with this question. Can you also help with this related question? Thank you so much. https://community.splunk.com/t5/Splunk-Search/How-to-calculate-total-when-aggregating-using-stats-max-field/m-p/660403#M227978
How to calculate the total when aggregating using stats max(field)? Thank you for your help.

Max TotalScore is the sum of the maximum of each Score field when aggregating all rows using stats: max(Score1), max(Score2), max(Score3). TotalScore is the total of the Score fields for each individual row (without aggregation).

This is the output I need:

| Class  | Name    | Subject | TotalScore | Score1 | Score2 | Score3 | Max TotalScore |
| ClassA | grouped | grouped | 240        | 85     | 95     | 80     | 260            |

My Splunk search:

index=scoreindex
| stats values(Name) as Name, values(Subject) as Subject, max(TotalScore) as TotalScore, max(Score1) as Score1, max(Score2) as Score2, max(Score3) as Score3 by Class
| table Class, Name, Subject, TotalScore, Score1, Score2, Score3

I think my search above is going to display the following:

| Class  | Name              | Subject      | TotalScore | Score1 | Score2 | Score3 |
| ClassA | Name1 Name2 Name3 | Math English | 240        | 85     | 95     | 80     |

This is the whole data in table format from scoreindex:

| Class  | Name  | Subject | TotalScore | Score1 | Score2 | Score3 |
| ClassA | Name1 | Math    | 170        | 60     | 40     | 70     |
| ClassA | Name1 | English | 195        | 85     | 60     | 50     |
| ClassA | Name2 | Math    | 175        | 50     | 60     | 65     |
| ClassA | Name2 | English | 240        | 80     | 90     | 70     |
| ClassA | Name3 | Math    | 170        | 40     | 60     | 70     |
| ClassA | Name3 | English | 230        | 55     | 95     | 80     |
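If the requirement is read as "Max TotalScore = sum of the per-class maximums of each Score field", one sketch (assuming the index and field names from the question above) is to add an eval after the stats:

```
index=scoreindex
| stats values(Name) as Name, values(Subject) as Subject, max(TotalScore) as TotalScore, max(Score1) as Score1, max(Score2) as Score2, max(Score3) as Score3 by Class
| eval MaxTotalScore = Score1 + Score2 + Score3
| table Class, Name, Subject, TotalScore, Score1, Score2, Score3, MaxTotalScore
```

With the sample data this yields MaxTotalScore = 85 + 95 + 80 = 260, matching the desired output row.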
Hi all, I have to parse logs extracted from logstash. I'm receiving logstash logs in JSON format, and almost all the fields I need are already parsed and available in the JSON. My issue is that the event raw data is in a field called "message", and those fields aren't automatically extracted as I would like. I'd like to avoid re-parsing all data sources and creating custom add-ons for every data source. Has anybody encountered this kind of integration and found a way to use standard add-ons to parse only the message field? Thank you for your help. Ciao. Giuseppe
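One search-time workaround (a sketch, not a confirmed solution; the index name is an assumption) is to swap the `message` field into `_raw` and re-run automatic key/value extraction, so that extractions operating on the raw event text can apply:

```
index=logstash
| eval _raw=message
| extract
```

For a standard add-on's extractions to kick in, the events generally also need to carry the sourcetype that the add-on expects, so this is at best a starting point rather than a full replacement for index-time routing.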
Have you consulted resources like these?
- Using threat intelligence in Splunk Enterprise Security
- Unified App for ES: Enrich and submit notable events - Splunk Intel Management (TruSTAR)
We use the ansible-role-for-splunk framework found on GitHub: https://github.com/schneewe/ansible-role-for-splunk It supports app deployments through the following task: https://github.com/schneewe/ansible-role-for-splunk/blob/master/roles/splunk/tasks/configure_apps.yml But this seems to require a full Search Head Cluster, and we only have a single search head node. Isn't a single-search-head setup supported by this framework, or am I just missing something?
I will for sure! Thank you all for the time you have dedicated to this. When this is done, I'll share my experience with you for any further feedback. Best regards
If the app is updating itself then it should write to the bin or local directory.  Local is preferred so the changes are not overwritten when the app is updated.
Please heed the note at the top of the file.

# DO NOT EDIT THIS FILE!
# Please make all changes to files in $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local.
# To make changes, copy the section/stanza you want to change from $SPLUNK_HOME/etc/apps/Splunk_TA_windows/default
# into ../local and edit there.

Any changes made to a default file will be lost when a new version of the app is installed. All changes should be made in a local file.
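As a sketch of what that looks like in practice (the stanza and setting shown are illustrative, not taken from the actual default file): copy only the stanza you want to change into the local directory and edit it there, leaving everything else to fall through to default.

```
# $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local/inputs.conf
# Only the overridden stanza and settings go here; all other
# settings are still inherited from default/inputs.conf.
[WinEventLog://Security]
disabled = 0
```

Settings in local take precedence over default, and local files survive app upgrades.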
Hi, I am using the same sourcetype on the same file. One copy is coming in via a forwarder and the other is uploaded via the GUI. However, the forwarder is not extracting the fields. This means I have to use spath to access the fields, which is a pain.

Below is a file from a forwarder; we can see the fields are not extracted. Below is the same file but uploaded; in this case, the fields are extracted.

This is the sourcetype:

[import_json_2]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = start_time
TZ = Asia/Beirut
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = 1

Any ideas? Thanks in advance. Rob
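One likely cause (an inference, not something confirmed in the thread): `INDEXED_EXTRACTIONS` for structured data is applied on the forwarder itself, so the stanza must exist in props.conf on the universal forwarder monitoring the file, not only on the indexer or search head. A minimal sketch of what would need to be deployed to the forwarder:

```
# props.conf on the *forwarder* that monitors the file
[import_json_2]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = start_time
TZ = Asia/Beirut
```

If the stanza exists only on the indexer/search head, a GUI upload (which is parsed locally) extracts the fields while the forwarded copy does not, which matches the symptom described above.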
The app is the same, and the configuration is also common. Is there any other folder where we can put the app to ensure that the app's default folder config files are loaded? (Without using etc/system/local)
Hi All, In my current dashboard I have several text inputs that colleagues can use to find various information. Sometimes it takes a while for their information to appear. Is there a way to add a loading notification/alert to advise colleagues that Splunk is retrieving the information but it may take some time? The delay is usually only for their first search, and thereafter the searches are pretty much instant. Many thanks, Paula
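One way to do this in Simple XML (a sketch; the query, token name, and message text are made up for illustration) is to use search event handlers: set a token while the search is running and unset it when it finishes, with an HTML element that is only shown while the token is set:

```
<search>
  <query>index=myindex $text_input$</query>
  <progress>
    <set token="show_loading">true</set>
  </progress>
  <done>
    <unset token="show_loading"></unset>
  </done>
</search>
<html depends="$show_loading$">
  <p>Splunk is retrieving your results; the first search may take a little while.</p>
</html>
```

The `depends` attribute hides the HTML panel whenever `$show_loading$` is unset, so the message appears only while the search is in flight.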
The point of the SHC Deployer is to ensure all SHC members have the same configuration.  If there is a need for unique configurations then they will have to be done manually (or perhaps using Ansible or something similar).
Hi @ramkyreddy, what exactly is your requirement?

<your_search>
| eval sku = if(name="",substr(kit,0,5),substr(name,0,5))
| eval sku=case(sku="NAC-D","ANT-P",sku="DHV-K","ABD-U",true(),sku)

The search should work. Ciao. Giuseppe
| Name                 | sku   | kit                  |
| NAC-D-CDSK-DLS-05.90 | NAC-D | HJA-JEOE-DNDN-94.4.0 |

This is my data. I want to replace NAC-D with ANT-P, for multiple values. This is my search query:

| eval sku = if(name=="",substr(kit,0,5),substr(name,0,5))
| eval sku=case(sku=="NAC-D","ANT-P",sku=="DHV-K","ABD-U",true(),sku)
Hi @ApolloJ, you said that you need "bandwidth usage by uri". Anyway, did my answer solve your request? If yes, please accept the answer; otherwise, tell me how I can help you. Ciao. Giuseppe P.S.: Karma Points are appreciated
Hi gcusello, thanks for the reply. "By URI" is not necessary for my case (as I want all of them).
Hi, I have created a custom app to implement ACME on search head cluster members, with a script in the bin folder that updates files/certificates in 3 folders: ./acme, ./certs, ./backup. The content of these folders needs to be different on each server (the deployer and the 3 members). How do I correctly deploy/implement this configuration? Thanking you in advance, Graça
| appendpipe [| stats count as _count | where _count=0 | eval sum="100%"]
What if I want to show 100% in an existing field? Let's say I have a sum field which is showing 0, so I want to show 100%. Is there a way to do that?
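If the goal is to change the value of the existing field itself, a minimal sketch (assuming the field is literally named sum and holds a numeric value) is:

```
| eval sum = if(sum == 0, "100%", sum)
```

Note that this turns sum into a string whenever it is 0, which is fine for display but would break any later numeric operations on that field.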