All Topics

Hi people, There was a good answer provided to part of this question here: Solved: Re: How to display a list of fields for an index? - Splunk Community. Taking this further, how would I join the index and sourcetype pair to each field name, so I would end up with something like this: someIndex.someSourcetype.someFieldname

index=firewall sourcetype=firewall1 fieldnames: host, source, srcip, dest, etc.
firewall.firewall1.srcip
firewall.firewall1.dest
firewall.firewall1.destport
....

index=networkdevices sourcetype=ids1 (sourcetype=ids2...)
networkdevices.ids1.src
networkdevices.ids2.dest
...
networkdevices.router1.src
....

index=someApp sourcetype=someTCPsource
someApp.someTCPsource.src
someApp.someTCPsource.randomField1
....

Or, alternatively, could I take the results of this query and run some modification of the search you proposed to dump the field names for each index:sourcetype pair? Something like:

| tstats values(field) as Field, count where index=* AND sourcetype=* by index, sourcetype
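One possible approach (an untested sketch): enumerate the index/sourcetype pairs with tstats, then use map to run fieldsummary against each pair and build the dotted name. The maxsearches cap, the head 1000 sample size, and the -24h window are assumptions to keep the runtime bounded; fieldsummary only sees the events it samples.

| tstats count where index=* by index, sourcetype
| map maxsearches=100 search="search index=$index$ sourcetype=$sourcetype$ earliest=-24h | head 1000 | fieldsummary | fields field | eval name=\"$index$.$sourcetype$.\".field"
| table name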
Hi team, I have raw data with status 200, 404, 503:

183080267.ap-southeast-1.elb.amazonaws.com | app | 200
183080267.ap-southeast-1.elb.amazonaws.com | app | 200
183080267.ap-southeast-1.elb.amazonaws.com | app | 200
183080267.ap-southeast-1.elb.amazonaws.com | app | 404
183080267.ap-southeast-1.elb.amazonaws.com | app | 200
183080267.ap-southeast-1.elb.amazonaws.com | app | 200
183080267.ap-southeast-1.elb.amazonaws.com | app | 200
183080267.ap-southeast-1.elb.amazonaws.com | app | 503

I want to calculate the total time spent on error requests (status != 200) by DNS name. Please help me! Thanks.
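A starting point (a sketch; the dns, app, and status field names are assumptions based on the pipe-delimited raw data, and "total time" is read here as the span between the first and last error per DNS name):

index=your_index sourcetype=your_sourcetype
| rex field=_raw "^(?<dns>[^ |]+)\s*\|\s*(?<app>[^ |]+)\s*\|\s*(?<status>\d+)"
| where status!="200"
| stats count as error_count earliest(_time) as first_error latest(_time) as last_error by dns
| eval error_window_sec = last_error - first_error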
Hi, I'm trying to drill down to a panel with depends="$some token$" from the Number Display viz, but instead of showing/hiding the panel according to the token, it opens a search. I'm having trouble making each of the 3 panels depend on a token set in the Number Display viz. Please assist.

<dashboard>
  <label>Show Panel based on Click</label>
  <row>
    <panel>
      <viz type="number_display_viz.number_display_viz">
        <search>
          <query>index="_internal" sourcetype="splunkd" log_level="*" | stats count by log_level</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="number_display_viz.number_display_viz.bordercolor">#ffffff</option>
        <option name="number_display_viz.number_display_viz.bordersize">2</option>
        <option name="number_display_viz.number_display_viz.colorprimary">#000000</option>
        <option name="number_display_viz.number_display_viz.colorprimarymode">auto</option>
        <option name="number_display_viz.number_display_viz.colorsecondary">#000000</option>
        <option name="number_display_viz.number_display_viz.colorsecondarymode">darker1</option>
        <option name="number_display_viz.number_display_viz.max">100</option>
        <option name="number_display_viz.number_display_viz.min">0</option>
        <option name="number_display_viz.number_display_viz.nodatacolor">#0178c7</option>
        <option name="number_display_viz.number_display_viz.padding">1</option>
        <option name="number_display_viz.number_display_viz.pulserate">4</option>
        <option name="number_display_viz.number_display_viz.shadowcolor">#F2F4F5</option>
        <option name="number_display_viz.number_display_viz.shapebordercolor">#FFFFFF</option>
        <option name="number_display_viz.number_display_viz.shapebordercolormode">static</option>
        <option name="number_display_viz.number_display_viz.shapebordersize">1</option>
        <option name="number_display_viz.number_display_viz.shapedropcolor">#ffffff</option>
        <option name="number_display_viz.number_display_viz.shapeshadow">yes</option>
        <option name="number_display_viz.number_display_viz.shapetexture">solid</option>
        <option name="number_display_viz.number_display_viz.sparkHeight">30</option>
        <option name="number_display_viz.number_display_viz.sparkWidth">90</option>
        <option name="number_display_viz.number_display_viz.sparkalign">5</option>
        <option name="number_display_viz.number_display_viz.sparkalignv">70</option>
        <option name="number_display_viz.number_display_viz.sparkcolorfill">#009DD9</option>
        <option name="number_display_viz.number_display_viz.sparkcolorline">#0178c7</option>
        <option name="number_display_viz.number_display_viz.sparkcolormodefill">auto</option>
        <option name="number_display_viz.number_display_viz.sparkcolormodeline">auto</option>
        <option name="number_display_viz.number_display_viz.sparkmin">0</option>
        <option name="number_display_viz.number_display_viz.sparknulls">gaps</option>
        <option name="number_display_viz.number_display_viz.sparkorder">no</option>
        <option name="number_display_viz.number_display_viz.sparkstyle">area</option>
        <option name="number_display_viz.number_display_viz.spinnerspeedmax">20</option>
        <option name="number_display_viz.number_display_viz.spinnerspeedmin">1</option>
        <option name="number_display_viz.number_display_viz.style">a2</option>
        <option name="number_display_viz.number_display_viz.subtitlealign">center</option>
        <option name="number_display_viz.number_display_viz.subtitlealignv">70</option>
        <option name="number_display_viz.number_display_viz.subtitlecolor">#5C6773</option>
        <option name="number_display_viz.number_display_viz.subtitlecolormode">static</option>
        <option name="number_display_viz.number_display_viz.subtitledrop">yes</option>
        <option name="number_display_viz.number_display_viz.subtitledropcolor">#ffffff</option>
        <option name="number_display_viz.number_display_viz.subtitlesize">40</option>
        <option name="number_display_viz.number_display_viz.textalign">center</option>
        <option name="number_display_viz.number_display_viz.textalignv">60</option>
        <option name="number_display_viz.number_display_viz.textcolor">#000000</option>
        <option name="number_display_viz.number_display_viz.textdrop">no</option>
        <option name="number_display_viz.number_display_viz.textdropcolor">#ffffff</option>
        <option name="number_display_viz.number_display_viz.textduration">300</option>
        <option name="number_display_viz.number_display_viz.textmode">static</option>
        <option name="number_display_viz.number_display_viz.textprecision">1</option>
        <option name="number_display_viz.number_display_viz.textsize">100</option>
        <option name="number_display_viz.number_display_viz.textunitposition">after</option>
        <option name="number_display_viz.number_display_viz.textunitsize">50</option>
        <option name="number_display_viz.number_display_viz.thickness">50</option>
        <option name="number_display_viz.number_display_viz.thresholdcol1">#1a9035</option>
        <option name="number_display_viz.number_display_viz.thresholdcol2">#d16f18</option>
        <option name="number_display_viz.number_display_viz.thresholdcol3">#b22b32</option>
        <option name="number_display_viz.number_display_viz.thresholdcol4">#ffffff</option>
        <option name="number_display_viz.number_display_viz.thresholdcol5">#ffffff</option>
        <option name="number_display_viz.number_display_viz.thresholdcol6">#ffffff</option>
        <option name="number_display_viz.number_display_viz.thresholdsize">20</option>
        <option name="number_display_viz.number_display_viz.thresholdval2">70</option>
        <option name="number_display_viz.number_display_viz.thresholdval3">90</option>
        <option name="number_display_viz.number_display_viz.titlealign">center</option>
        <option name="number_display_viz.number_display_viz.titlealignv">30</option>
        <option name="number_display_viz.number_display_viz.titlecolor">#edd051</option>
        <option name="number_display_viz.number_display_viz.titlecolormode">static</option>
        <option name="number_display_viz.number_display_viz.titledrop">no</option>
        <option name="number_display_viz.number_display_viz.titledropcolor">#ffffff</option>
        <option name="number_display_viz.number_display_viz.titlesize">65</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <condition field="INFO">
            <set token="tokINFO">true</set>
            <unset token="tokWARN"></unset>
            <unset token="tokERROR"></unset>
          </condition>
          <condition field="WARN">
            <unset token="tokINFO"></unset>
            <set token="tokWARN">true</set>
            <unset token="tokERROR"></unset>
          </condition>
          <condition field="ERROR">
            <unset token="tokINFO"></unset>
            <unset token="tokWARN"></unset>
            <set token="tokERROR">ERROR</set>
          </condition>
        </drilldown>
      </viz>
    </panel>
  </row>
  <row>
    <panel depends="$tokINFO$">
      <table>
        <title>INFO details</title>
        <search>
          <query>index=_internal sourcetype=splunkd log_level="INFO" | timechart count by component</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel depends="$tokWARN$">
      <table>
        <title>WARN details</title>
        <search>
          <query>index=_internal sourcetype=splunkd log_level="WARN" | timechart count by component</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
        </search>
        <option name="drilldown">cell</option>
      </table>
    </panel>
    <panel depends="$tokERROR$">
      <table>
        <title>ERROR details</title>
        <search>
          <query>index=_internal sourcetype=splunkd log_level="ERROR" | timechart count by component</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</dashboard>
Hi, we need an alert query to monitor CPU usage every 5 minutes and send a high-importance email when it matches 5 of 6 bad samples, i.e., when CPU utilization is greater than 95% for 5 out of 6 intervals (each interval 5 minutes apart). We are failing to write the query that checks for the 5-of-6 bad samples. Please assist me in getting out of this situation.
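One way to express "5 of the last 6 five-minute samples above 95%" (a sketch; the index, sourcetype, cpu_pct field, and per-host split are assumptions to adapt to your data):

index=perfmon sourcetype=cpu earliest=-30m@m latest=@m
| bin _time span=5m
| stats avg(cpu_pct) as cpu by _time, host
| eval bad=if(cpu > 95, 1, 0)
| stats sum(bad) as bad_samples count as total_samples by host
| where total_samples >= 6 AND bad_samples >= 5

Schedule the alert every 5 minutes and trigger when the number of results is greater than zero.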
Hello, I have an add-on which fetches emails from a mail server. Due to O365 limitations, the add-on's Python script required modification to support OAuth authentication. Post modification, the script runs; however, it appears to crash once in a while, at random. After enabling DEBUG for ExecProcessor, the following info is seen in the logs:

07-18-2023 15:29:34.771 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - PipelineSet 0: Got EOF from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py", uniqueId=20706
07-18-2023 15:29:34.781 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - PipelineSet 0: Ran script: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py, took 149.201812652 seconds to run, 0 bytes read 0 events read, status=done, exit=0, utime_sec=52.214078, stime_sec=7.235928, max_rss_kb=331052, vm_minor=606863, sched_vol=152097, sched_invol=3187
07-18-2023 15:29:34.781 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - PipelineSet 0: Destroying ExecedCommandPipe for "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py" id=20706
07-18-2023 15:29:34.781 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - ExecProcessorSharedState::addToRunQueue() path='/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py' restartTimerIfNeeded=1
07-18-2023 15:29:34.781 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - adding "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py" to runqueue
07-18-2023 15:29:34.781 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - cmd='/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py' Added to run queue
07-18-2023 15:29:34.781 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - Running: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py on PipelineSet 0
07-18-2023 15:29:34.781 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - PipelineSet 0: Created new ExecedCommandPipe for "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py", uniqueId=20741
07-18-2023 15:29:58.340 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - cmd='/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py' Not added to run queue
07-18-2023 15:30:58.337 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - cmd='/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py' Not added to run queue
07-18-2023 15:31:58.339 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - cmd='/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py' Not added to run queue
07-18-2023 15:32:58.339 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - cmd='/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py' Not added to run queue
07-18-2023 15:33:58.340 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - cmd='/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py' Not added to run queue
07-18-2023 15:34:58.342 +0530 DEBUG ExecProcessor [1181322 ExecProcessor] - cmd='/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py' Not added to run queue

I also noticed a few processes of the script that have been in interruptible sleep state for many days:

root@splunk:~# ps aux | grep splunk
root 303533 0.0 0.7 129636 121296 ? S Jul14 0:21 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py
root 375980 0.0 0.4 98060 81632 ? S Jul14 0:18 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py
....
root 1221826 0.0 0.0 3968 1336 ? S 18:28 0:00 /bin/sh -c /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py
root 1221827 11.8 0.8 149052 140628 ? S 18:28 0:19 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py
....
root 1536311 0.0 0.5 113036 96648 ? S Jul05 0:25 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py
root 2278791 0.0 0.5 100092 91804 ? S Jul07 0:23 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py
root 3247873 0.0 0.6 114140 106092 ? S Jul10 0:25 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py
root 3704217 0.0 0.8 148820 140472 ? S Jul12 0:19 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-mailclient/bin/mail.py

Any help to debug further or fix the issue is highly appreciated. Thank you.

Regards, Arun
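To narrow down the random crashes, it may help to pull everything splunkd logs about the script above DEBUG level (a sketch; widen or narrow the filters as needed):

index=_internal sourcetype=splunkd component=ExecProcessor "TA-mailclient" (log_level=WARN OR log_level=ERROR)
| table _time log_level _raw

The long-sleeping processes also suggest the script sometimes never exits (for example, a blocking IMAP/network call without a timeout), so each scheduled run piles up behind a stuck one; adding socket timeouts to the OAuth/mail calls in the script may be worth considering.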
Hi, I want to ask if I can use generative AI to generate SPL based on my Splunk indices and the data models in those indices. The main goal is to be able to type what you want from Splunk into an input field and get back a usable SPL query. Is this possible using the OpenAI API add-on? Is there any other recommended tool?
Good day everyone. I have the below dashboard in my Splunk. I have an automation set up for it using HTML, but the result doesn't look like this. Is it possible to capture a screenshot of the dashboard as it is and automatically send it to a Teams chat or via email every morning?
I'm trying to find the jobs that failed and the jobs that succeeded; this is being ingested into Splunk as a job log. I have to plot this as a timechart based on the hours of a given day, e.g., today. As there is no criterion that says a job ran successfully, I had to capture just the error in the log to say that a JOB_NUMBER failed.

Sample of a success log:

7/18/23 8:40:58 AM INFO Start Date and Time: 7/18/23 8:40:58 AM
7/18/23 8:40:58 AM INFO Job Number: 1000000018011
7/18/23 8:40:58 AM INFO Project Name: PROJECT XXX
7/18/23 8:40:58 AM INFO Submitted By: SSS
7/18/23 8:40:58 AM INFO Submitted From: Scheduler
7/18/23 8:40:58 AM INFO System: SERVER
7/18/23 8:40:58 AM INFO APP NAME
7/18/23 8:40:58 AM INFO Executing project 'XYZ'
7/18/23 8:40:58 AM INFO Project location: /some/where/in/NAS/file.xml
7/18/23 8:40:58 AM INFO Executing module 'Main'
7/18/23 8:40:58 AM INFO Executing task 'sftp 1.0 (Connect to SFTP)'
7/18/23 8:40:58 AM INFO Connecting to 'someserver' at port '22' as user 'XXX'
7/18/23 8:40:58 AM INFO Executing sub-task 'put'
7/18/23 8:40:58 AM INFO Setting the data type to BINARY
7/18/23 8:40:58 AM INFO 0 files were uploaded successfully
7/18/23 8:40:58 AM INFO Finished sub-task 'put'
7/18/23 8:40:58 AM INFO Closed the FTP connection
7/18/23 8:40:58 AM INFO Finished task 'sftp 1.0 (Connect to SFTP)'
7/18/23 8:40:58 AM INFO Executing task 'move 1.0 (Move uploaded files to Archive)'
7/18/23 8:40:58 AM INFO 0 files were moved successfully
7/18/23 8:40:58 AM INFO Finished task 'move 1.0 (Move uploaded files to Archive)'
7/18/23 8:40:58 AM INFO Finished module 'Main'
7/18/23 8:40:58 AM INFO Finished project 'PROJECT XXX'
7/18/23 8:40:58 AM INFO End Date and Time: 7/18/23 8:40:58 AM

SPL to get the total jobs and plot a timechart for each hour of the day:

index=xxx sourcetype=job_sourcetype source=job_log
| rex "Job Number: (?P<JOB_NUMBER>.+)"
| dedup JOB_NUMBER
| rex "Start Date and Time: (?P<START_DATE_TIME>.+)"
| eval DATE_TIME=strftime(strptime(START_DATE_TIME,"%m/%d/%y %I:%M:%S %p"),"%d/%m/%Y %H")
| timechart span=1h count AS TOTAL_JOBS BY DATE_TIME

Sample log of a failed job:

7/18/23 8:15:58 AM INFO Start Date and Time: 7/18/23 8:15:58 AM
7/18/23 8:15:58 AM INFO Job Number: 1000000018003
7/18/23 8:15:58 AM INFO Project Name: XYX PROJECT
7/18/23 8:15:58 AM INFO Submitted By: SOMEONE
7/18/23 8:15:58 AM INFO Submitted From: Scheduler
7/18/23 8:15:58 AM INFO System: SERVER
7/18/23 8:15:58 AM INFO APP NAME
7/18/23 8:15:58 AM INFO Executing project 'SOME PROJECT'
7/18/23 8:15:58 AM INFO Project location: /some/where/in/nas/file.xml
7/18/23 8:15:58 AM INFO Executing module 'Main'
7/18/23 8:15:58 AM INFO Executing task 'timestamp 1.0 (Current date)'
7/18/23 8:15:58 AM INFO Default system date, time, and timestamp variables have been created and/or set to the current date and time '2023-07-18 08:15:58.182'
7/18/23 8:15:58 AM INFO Finished task 'timestamp 1.0 (Current date)'
7/18/23 8:15:58 AM ERROR [1234 - Copy All Files Except Offer Pack Files 'file name/directory' not found. Full stack trace written to '1000000018003_error_1.log'
7/18/23 8:15:58 AM INFO Continuing with the next task or module, if any
7/18/23 8:15:58 AM ERROR [1235 - Copy All Files Except Offer Pack Files 'file name/directory' not found. Full stack trace written to '1000000018003_error_2.log'
7/18/23 8:15:58 AM INFO Continuing with the next task or module, if any
7/18/23 8:15:58 AM INFO Finished module 'Main'
7/18/23 8:15:58 AM INFO Finished project 'SOME PROJECT'
7/18/23 8:15:58 AM INFO End Date and Time: 7/18/23 8:15:58 AM

SPL to get the job errors and a timechart per hour of the day:

index=xxx sourcetype=job_sourcetype source=job_log error
| rex "Job Number: (?P<JOB_NUMBER>.+)"
| dedup JOB_NUMBER
| rex "Submitted From: (?P<SUBMITTED_FROM>.+)"
| rex "Start Date and Time: (?P<START_DATE_TIME>.+)"
| eval DATE_TIME=strftime(strptime(START_DATE_TIME,"%m/%d/%y %I:%M:%S %p"),"%d/%m/%Y %H")
| rex max_match=0 "ERROR (?P<ERROR>.*)"
| where SUBMITTED_FROM="Scheduler"
| timechart span=1h count AS JOB_FAILURE BY DATE_TIME

Both sources are from the same index, source, and sourcetype. I need to plot a chart with both total jobs and failures for each hour as a stacked bar chart. The struggle here is combining these two queries into one timechart that shows both totals and failures. I tried showing success and failure together, but no luck.
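One way to combine them into a single stacked timechart (a sketch, assuming each job run is ingested as one multiline event, as the samples suggest, and that _time already reflects the job start):

index=xxx sourcetype=job_sourcetype source=job_log
| rex "Job Number: (?P<JOB_NUMBER>\d+)"
| rex "Submitted From: (?P<SUBMITTED_FROM>.+)"
| where SUBMITTED_FROM="Scheduler"
| dedup JOB_NUMBER
| eval RESULT=if(like(_raw, "% ERROR %"), "JOB_FAILURE", "JOB_SUCCESS")
| timechart span=1h count by RESULT

Set the visualization to a stacked bar chart; the two series then add up to the total jobs per hour.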
Hi, in the below code for a panel on my dashboard, I am displaying whether a report/alert is being skipped: the alert counts as skipping if the _time field returned from Lookup.csv is more than 20 minutes ago. I would also like to display the value of _time as well as the message. Can this be done?

<query>
| inputlookup append=t Lookup.csv
| eval tnow = now()
| eval lastruntime_unix = _time
| eval time_diff = tnow - lastruntime_unix
| eval status=if(time_diff > 1200, "1", "0")
| table status
| rangemap field=status low=0-0 severe=1-5 default=severe
| replace "0" with "Alert Run is Up to Date" in status
| replace "1" with "Alert Run is Skipping" in status
</query>
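A sketch that carries _time through and formats it for display (field names follow the original query; the timestamp format is an assumption):

<query>
| inputlookup append=t Lookup.csv
| eval time_diff = now() - _time
| eval last_run = strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval severity = if(time_diff > 1200, "1", "0")
| rangemap field=severity low=0-0 severe=1-5 default=severe
| eval status = if(severity == "1", "Alert Run is Skipping", "Alert Run is Up to Date")
| table last_run status range
</query>

Keeping rangemap on the numeric severity field preserves the range value for panel coloring while the human-readable status and last_run are shown alongside it.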
We are trying to push the cluster bundle from the cluster manager to the indexer peers. We are getting the following error for indexes.conf: "[Critical] idx=ad_data_enrichment Volume specification required for param=tstatsHomePath" @yuanliu
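For reference, tstatsHomePath must be specified relative to a volume, so the usual fix is to reference one in the pushed indexes.conf (a sketch; the paths are placeholders, and volume:_splunk_summaries is the volume that recent Splunk versions define by default):

[volume:_splunk_summaries]
path = /opt/splunk/var/lib/splunk

[ad_data_enrichment]
homePath   = $SPLUNK_DB/ad_data_enrichment/db
coldPath   = $SPLUNK_DB/ad_data_enrichment/colddb
thawedPath = $SPLUNK_DB/ad_data_enrichment/thaweddb
tstatsHomePath = volume:_splunk_summaries/ad_data_enrichment/datamodel_summary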
Hi, can anybody help, please? I'm using Splunk Universal Forwarder 9.0.4 (build de405f4a7979), and since 15.07.2023 I have had no indexed data in Splunk. After a restart there is only one error:

Invalid key in stanza [webhook] in C:\Program Files\SplunkUniversalForwarder\etc\system\default\alert_actions.conf, line 229: enable_allowlist (value: false). Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'

I tried these steps: 1. removed [webhook] enable_allowlist = false, or 2. changed it to true. Nothing helped. Any advice, please?
Since Splunk Stream doesn't support M1/ARM-based processors yet, are there any Splunk Stream alternatives that we can use to get wire data directly from hosts? Also, is supporting M1/ARM-based processors for Stream on Splunk's roadmap?
index=mail [ | inputlookup email_users.csv | rename address AS query | fields query ]
| dedup MessageTraceId
| lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match
| where isnull(domain_match)
| lookup all_email_provider_domains domain AS RecipientDomain output domain as domain_match2
| where isnotnull(domain_match2)
| stats values(RecipientAddress) as Recipient values(Subject) as Subject earliest(_time) AS "Earliest" latest(_time) AS "Latest" values(Status) as Status by RecipientDomain SenderAddress
| eval subject_count=mvcount(Subject)
| sort - subject_count
| convert ctime("Latest")
| convert ctime("Earliest")

Hi, I have a CSV called email_users.csv. There are two columns: one is address, the other is event date. After the above query has run, there should be a few results; those results match the list from the address column. I want to also show the event date column from the CSV for each matching result. Please help.
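One way (a sketch): after the stats, look up the same CSV again to pull the date. The event_date column name and the assumption that address corresponds to SenderAddress are guesses to adjust:

| lookup email_users.csv address AS SenderAddress OUTPUT event_date
| table RecipientDomain SenderAddress event_date Recipient Subject Earliest Latest Status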
My event data contains the following:

target: [
  {
    alternateId: application1
    detailEntry: { }
    displayName: OpenID Connect Client
    id: asdfasdf
    type: AppInstance
  }
  {
    alternateId: unknown
    detailEntry: null
    displayName: Unregistered Device - Default
    id: adsfasdf
    type: Rule

I want to do a | stats count by target.displayName, but only on events that have target.type=Rule. It is possible to have more than two entries as well, so I can't just always select the second entry.
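The usual pattern is to expand the target array and re-parse each element so that type and displayName stay paired per entry (a sketch; assumes the raw event is JSON that spath can read):

| spath path=target{} output=target
| mvexpand target
| spath input=target
| where type="Rule"
| stats count by displayName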
Hi, I have a search used on a dashboard that I would like tweaked. Currently this search/panel displays the variance of the current hour over the same hour the week before. For example, the value at hour 10 on Wed 7/19/23 is compared to the value at hour 10 on Wed 7/12/23 to give the variance. Instead, I would like to compare the current hour's value to the average of that same hour over the last 2 weeks (instead of to a single day). For example, I would like hour 10 on Wed 7/19/23 to be compared to the average of hour 10 of each day from Tues 7/18/23 back to Wed 7/5/23.

Current search:

| tstats count where index=msexchange host=SMEXCH13* earliest=-14d@d latest=-13d@d by _time span=1h
| eval hour=strftime(_time,"%H")
| eval ReportKey="2weekprior"
| stats values(count) as count by hour, ReportKey
| append
    [| tstats count where index=msexchange host=SMEXCH13* earliest=-7d@d latest=-6d@d by _time span=1h
     | eval hour=strftime(_time,"%H")
     | eval ReportKey="1weekprior"
     | stats values(count) as count by hour, ReportKey ]
| append
    [| tstats count where index=msexchange host=SMEXCH13* earliest=-0d@d latest=-0h@h by _time span=1h
     | eval hour=strftime(_time,"%H")
     | eval ReportKey="currentweek"
     | stats values(count) as count by hour, ReportKey ]
| eval currenthour=strftime(_time,"%H")
| xyseries hour, ReportKey, count
| eval nowhour = strftime(now(),"%H")
| eval comparehour = nowhour-1
| where hour<=comparehour
| sort by -hour
| table hour, nowhour, comparehour, currentweek, 1weekprior, 2weekprior
| eval 1weekvar = currentweek/'1weekprior'
| eval 2weekvar = currentweek/'2weekprior'
| eval variance=round(((('1weekvar'+'2weekvar')/2)*100)-100,2)
| table hour, variance
| head 5
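A sketch of the baseline-average approach: pull all 14 prior days in one tstats call, average the counts by hour, and compare today's hours against that average.

| tstats count where index=msexchange host=SMEXCH13* earliest=-14d@d latest=@d by _time span=1h
| eval hour=strftime(_time,"%H")
| stats avg(count) as baseline by hour
| append
    [| tstats count where index=msexchange host=SMEXCH13* earliest=@d latest=-0h@h by _time span=1h
     | eval hour=strftime(_time,"%H")
     | stats sum(count) as currentday by hour ]
| stats values(baseline) as baseline values(currentday) as currentday by hour
| where isnotnull(currentday)
| eval variance=round(((currentday/baseline)*100)-100,2)
| table hour, currentday, baseline, variance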
This happens regardless of whether I am installing fresh or upgrading an existing install to version 9.1.0.1. Every action that involves the splunk binary prepends all output with:

Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunkfwd /opt/splunkforwarder"

I've tried manually running that chown, as root! And the warning still persists, even though the contents under /opt/splunkforwarder are now owned by splunkfwd recursively.
Hi, we have an indexer cluster with a dedicated cluster manager. The indexers have an additional hard drive attached for the custom indexes; the cluster manager has only one hard disk. When adding an index to the cluster manager's indexes.conf, I am getting the error "Failed to create directory". Does this mean the cluster manager must have the same number of hard disks as the indexers, or would it be sufficient to create a variable (CUSTOM_INDEX) that points to /opt/splunk-home/var/lib/splunk on the cluster manager and to a folder on the additional hard disk on the indexers? Thanks in advance for sharing your wisdom. Alex
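The cluster manager does not need matching disks: index definitions intended for the peers usually live under manager-apps on the manager, where they are distributed to the peers rather than applied to the manager itself, and per-node path differences are commonly handled with $SPLUNK_DB (a sketch; paths and the index name are placeholders):

# On the cluster manager, in
# $SPLUNK_HOME/etc/manager-apps/_cluster/local/indexes.conf
# (distributed to the peers, not applied to the manager itself)
[custom_index]
homePath   = $SPLUNK_DB/custom_index/db
coldPath   = $SPLUNK_DB/custom_index/colddb
thawedPath = $SPLUNK_DB/custom_index/thaweddb
repFactor  = auto

On each indexer, SPLUNK_DB (set in splunk-launch.conf) can point at the additional hard drive, while the manager keeps its default.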
Guys, good morning. I'm having trouble inverting the table below. I need the "key_type" information laid out horizontally, with the "sync_status" information on the bottom row. Does anyone know how I can do this?

Current:

| eval last_successful=strftime(strptime(last_successful_sync, "%F %T"), "%d/%m/%Y")
| eval sync_status=if(sync_status == "t", ":gmud-approved:"." "."Sucesso", ":x-negative:"." "."Falhou"." -- "."Last Successful: ".'last_successful')
| stats c as _c by key_type sync_status

How I need it to look:

EVP           CPF           CNPJ          EMAIL         PHONE
'sync_status' 'sync_status' 'sync_status' 'sync_status' 'sync_status'
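A sketch using transpose to flip the table so the key_type values become the column headers (assumes one sync_status value per key_type):

| eval last_successful=strftime(strptime(last_successful_sync, "%F %T"), "%d/%m/%Y")
| eval sync_status=if(sync_status == "t", ":gmud-approved:"." "."Sucesso", ":x-negative:"." "."Falhou"." -- "."Last Successful: ".'last_successful')
| stats values(sync_status) as sync_status by key_type
| transpose header_field=key_type column_name=field
| fields - field

The final fields command drops the label column that transpose adds, leaving just the sync_status row under the key_type headers.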
Hi, is there a way to set default values for the fields in the configure-action dialog for the Create ServiceNow Incident action on the Episode Review page? When we select an episode and go to Actions --> Create ServiceNow Incident, is there a way to prepopulate some of the fields, or do they have to be manually completed each time? We currently have NEAPs that automatically create ServiceNow Incidents, but this scenario is for episodes that require manual actions before we create the incidents. Thanks.
I have the following SPL in Splunk:

index=demo | search compliance=standard1
| timechart span=1week count by status
| addtotals row=t fieldname="total" enable not_enable
| eval percentage = round((enable / total) * 100, 0) . " %"
| reverse
| table _time percentage

The above SPL shows the percentage week over week. I want to add another column that shows the percentage difference between the last week and the week before. How do I show this next to the last-week row? If the week before is 56% and the last week is 70%, it needs to show 14% next to the last-week row. How can I do this? I tried join and append, but they did not work for me. Thanks in advance.
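A sketch using autoregress to pull the previous week's percentage onto each row; computing the numeric value before appending " %" keeps the subtraction valid (field names follow the original query):

index=demo | search compliance=standard1
| timechart span=1week count by status
| addtotals row=t fieldname="total" enable not_enable
| eval pct = round((enable / total) * 100, 0)
| autoregress pct as prev_pct p=1
| eval change = pct - prev_pct
| eval percentage = pct." %"
| reverse
| table _time percentage change

Because autoregress runs before the reverse, prev_pct is the week before each row, so the last-week row shows 70 - 56 = 14.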