All Topics


Hi All, we are on Splunk Cloud. On one of our dashboard panels I am getting a warning message: [idx-xxxx field 'technique_id' does not exist in the data]. Interestingly, this wasn't the case until last week. Please see the screenshots below. The search runs in Fast mode by default since it's a dashboard query, and it populates data in a panel. While troubleshooting, if I run it manually in Verbose mode, the field technique_id shows up under "Interesting fields". Why is Fast mode throwing that warning, and how do I get rid of it? Is this an indexing issue with one of those indexers?
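A possible workaround, sketched under the assumption that the warning comes from the panel search referencing technique_id before Fast mode has extracted it: in Fast mode Splunk only extracts fields the search explicitly asks for, so naming the field and giving it a default is often enough to silence the warning. The index and sourcetype below are placeholders:

```
index=your_index sourcetype=your_sourcetype
| fields technique_id
| fillnull value="unknown" technique_id
| stats count by technique_id
```

Alternatively, changing the panel's search mode to Smart or Verbose (at the cost of performance) would also force the extraction.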
Hi Team, how can I check seven years of data? The first ingestion was on 26 Dec 2016, and I need the total data size from that start date to Jun 30 2023. I tried the following query, but when I run it I get:

1. "DAG Execution Exception Error": search has been cancelled
2. Search auto-cancelled

The query I used:

index=wineventlog source=security command_type!="METER_ALERT"
| eval size=len(_raw)
| eval raw_len_KB=round(size/1024,3)
| eval raw_len_MB=round(size/1024/1024,3)
| eval raw_len_GB=round(size/1024/1024/1024,3)
| table size, raw_len_KB, raw_len_MB, raw_len_GB, index
| stats count sum(size) as Bytes sum(raw_len_KB) as KB sum(raw_len_MB) as MB sum(raw_len_GB) as GB by index

Please help with this. Thanks in advance, Bala
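Summing len(_raw) over seven years scans every event and commonly runs into search limits, which matches the auto-cancel symptoms above. A lighter-weight sketch asks the indexers for bucket metadata instead; note it reports raw size per index as stored on disk, not broken down by source, so treat it as an approximation:

```
| dbinspect index=wineventlog
| stats sum(rawSize) as Bytes by index
| eval GB=round(Bytes/1024/1024/1024,3)
```

This runs in seconds regardless of the time range because it reads bucket metadata rather than events.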
Hello, I'm kind of new to Splunk. The goal of this code is to show the details from a bar graph when clicking a specific stacked value. Using drilldowns, I got the tokens date4=$click.value$ and yaxis4=$click.name2$. For some reason I can't get the details from the graph; it shows only a "No Results" message. I tried to modify my search, but without success. What am I doing wrong in the search? Do I have to change something else to get the result I need?

index="" host= sourcetype=csv source=C:\\.....\\*
| dedup ID Freeze source
| table ID Title
| search SWVersion=$sw_version_filter1$
| eval YYYY_CW_DD=split(source,"\\")
| eval YYYY_CW_DD=substr(mvindex(YYYY_CW_DD, mvcount(YYYY_CW_DD)-1),1,11)
| where Freeze>="2023-01-01"
| eval Freeze=substr(mvindex(Freeze, mvcount(pFreeze)-1),1,10)
| eval test=if(LCS="New" or LCS="Evaluated",1,0)
| eval test1=if((LCS="x" or LCS="Implemented" or LCS="y") and (ifdLCS="m" or ifdLCS="n"), 1,0)
| eval test2=if((LCS="x" or LCS="Implemented" or LCS="y") and (iswCC="p") and (iswCQR!="q" or iswCQR!="r"), 1,0)
| where Freeze="$date4$"
Hi, I'm working with a large amount of data. I wrote a main report that extracts all events (call them events A, B, C, D) from the last 30 days and does some field manipulations. Then I wrote five reports that filter the main saved report by event type and keep only the relevant fields for each event: for example, the report for event A contains all fields relevant to event A, the report for event B contains all fields relevant to event B, and so on. My dashboard contains five tabs, one per event (tab 1 for report A, tab 2 for report B, ...), and each triggers the relevant saved-search report.

The problem: all the reports run very slowly. My questions:

1. How can I read only the delta data each time? That is, how do I avoid reading 30 days at once on every run: if the query already ran today and I execute it again, it should read only the new data and reuse the history already read in the previous run.
2. I read a bit about summary indexes. My reports extract all fields and don't aggregate data. How would I build my six reports (main + five others) with a summary index? As I said, I use the table command, not functions like top or count; my reports just extract relevant fields with some renaming.

If you would recommend a summary index, I would appreciate example code, because I have six reports and I'm not sure how to work with a summary index. Thanks, Maayan
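A minimal sketch of the summary-index pattern for a non-aggregating report like this: schedule the main search over a short window (say, the last hour) and append its table to a summary index with collect, then point the five per-event reports and the dashboard at the summary index over 30 days. The index name summary_main and the field names are illustrative, and the summary index must exist before collect will write to it:

```
index=your_raw_index earliest=-1h@h latest=@h
    ... your existing field manipulations ...
| eval event_type=case(condition_a, "A", condition_b, "B", true(), "other")
| table _time event_type field1 field2
| collect index=summary_main
```

Each per-event report then becomes something like `index=summary_main event_type=A | table ...` over the last 30 days. This also addresses the delta question: only the scheduled search reads raw data, and only for the newest window, while the dashboard reads the much smaller pre-extracted summary.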
Hello, is there a way to open a Splunk ITSI glass table in full-screen mode by default? I want to show the glass table on TVs that have no keyboard connected, so I can't click the full-screen button.
Hi, I get a 500 Internal Server Error when I access the Licensing page, and this shows up in the log:

2023-08-07 13:39:01,536 ERROR [64d09185547fa7f0295810] __init__:370 - Mako failed to render:
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/__init__.py", line 366, in render_template
    return templateInstance.render(**template_args)
  File "/opt/splunk/lib/python3.7/site-packages/mako/template.py", line 476, in render
    return runtime._render(self, self.callable_, args, data)
  File "/opt/splunk/lib/python3.7/site-packages/mako/runtime.py", line 883, in _render
    **_kwargs_for_callable(callable_, data)
  File "/opt/splunk/lib/python3.7/site-packages/mako/runtime.py", line 920, in _render_context
    _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/mako/runtime.py", line 947, in _exec_template
    callable_(context, *args, **kwargs)
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/base.html", line 15, in render_body
    <%self:render/>
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/base.html", line 21, in render_render
    <%self:pagedoc/>
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/base.html", line 95, in render_pagedoc
    <%next:body/>
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/layout/admin_lite.html", line 92, in render_body
    ${next.body()}
  File "/opt/splunk/share/splunk/search_mrsparkle/templates/licensing/overview.html", line 209, in render_body
    % if hard_messages['cle_pool_over_quota'] is not None and hard_messages['cle_pool_over_quota']['count'] is not None and hard_messages['cle_pool_over_quota']['count'] >= stack_table[0]['max_violations']:
KeyError: 'cle_pool_over_quota'

Any help with this?
Hi all, I want to send specific logs from one Heavy Forwarder to another Heavy Forwarder. I don't want to send the full logs; I just need to send one of the sourcetypes to the other Heavy Forwarder. The Splunk version is 9.0.4. How can I do that? Thanks.
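A sketch of one common way to do this with sourcetype-based routing on the sending HF. The sourcetype my_sourcetype, the group name second_hf, and the host names are placeholders; the second HF also needs a splunktcp input listening on 9997:

```
# outputs.conf on the first Heavy Forwarder
[tcpout]
defaultGroup = primary_group

[tcpout:primary_group]
server = indexer1.example.com:9997

[tcpout:second_hf]
server = second-hf.example.com:9997

# props.conf
[my_sourcetype]
TRANSFORMS-routing = route_to_second_hf

# transforms.conf
[route_to_second_hf]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = second_hf
```

With this, all other data still goes to the default group, while events of my_sourcetype are routed only to the second HF; to send that sourcetype to both destinations, set FORMAT = primary_group,second_hf.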
Hi All, I have a row in my dashboard that consists of multiple panels in one line, as shown below. Is there a way to put a horizontal scroll bar at the bottom of this row so that it allows me to slide left to right, and that way I can enlarge these panels? My requirement is to make all 10 boxes (panels) fit in the same row while keeping them readable and slightly bigger; autosizing ends up shrinking some words. The XML for the row looks something like this:

<row>
  <panel>
    <single>
      <title>Initial Access</title>
      <search>
        <query></query>
      </search>
      <option name="colorBy">value</option>
      <option name="colorMode">none</option>
      <option name="drilldown">all</option>
      <option name="height">125</option>
      <option name="rangeColors">["0x000","0xd93f3c"]</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
      <option name="showSparkline">1</option>
      <option name="showTrendIndicator">1</option>
      <option name="trendDisplayMode">absolute</option>
      <option name="unitPosition">after</option>
      <option name="useColors">1</option>
    </single>
  </panel>
  <panel>
    <single>
      <title>Execution</title>
      ....
    </single>
  </panel>
  ....
</row>

I am aware that including overflow: auto inside an HTML block of a panel will show the scroll bar for that individual panel, but in my case I need a common scroll bar for all the panels in the same row. I hope that is clear.
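One approach sometimes used for this in Simple XML is to give the row an id and add a hidden HTML panel whose CSS lets that row scroll horizontally. This is only a sketch: the id mitre_row is made up, and the internal class names Splunk uses for row cells vary by version, so the selectors will likely need adjusting:

```
<row id="mitre_row">
  ...the existing ten single-value panels...
</row>
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        #mitre_row {
          overflow-x: auto !important;
        }
        #mitre_row .dashboard-cell {
          min-width: 180px !important;
        }
      </style>
    </html>
  </panel>
</row>
```

The min-width on the cells is what stops autosizing from shrinking the panels; the row then overflows and scrolls instead.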
Hi Team, I see logs coming from UNIX devices in the format below:

<38>Aug 1 13:20:29 dns.customer.net 10.32.9.5 sshd[14171]: Failed password for michal from 10.32.7.28 port 58255 ssh2

When I look at the selected events in the left panel, these logs are not getting parsed: fields like username, source IP, port, and protocol are not extracted. Any suggestions, please? The logs arrive via rsyslog over a TCP input from the device.
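If no field extractions exist for this sourcetype, a search-time rex can pull the fields out of lines shaped like the sample above. A sketch; the index and sourcetype are placeholders, and the pattern only covers the "Failed password" message shown:

```
index=your_syslog_index sourcetype=your_sourcetype "Failed password"
| rex "Failed password for (?<user>\S+) from (?<src_ip>\d{1,3}(?:\.\d{1,3}){3}) port (?<src_port>\d+) (?<protocol>\S+)"
| table _time user src_ip src_port protocol
```

For permanent extractions, the same regex can go into a props.conf EXTRACT- setting for the sourcetype instead of being repeated in every search.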
Hi Everyone, I have a search query and a lookup. The search query returns some filenames and their creation times, and the lookup, which comes from an external team, contains the same filenames and the transfer time of each file into our system.

First search (on an eventtype): Filename and Time of creation are the table fields.
Second, the lookup fields: Filename and Transfer time.

I want to combine the eventtype search and the lookup into one table with the combined fields: Time of creation, Filename, Transfer Time. How do I achieve this? I am pretty stuck on this issue.
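Assuming the external team's file is configured as a lookup in Splunk (the lookup name transfer_lookup below is made up, and the Filename field in the lookup definition must match the extracted field exactly, including case), a sketch of the combined table:

```
eventtype=your_eventtype
| table "Time of creation" Filename
| lookup transfer_lookup Filename OUTPUT "Transfer Time"
| table "Time of creation" Filename "Transfer Time"
```

The lookup command does the join on Filename and adds Transfer Time to each matching row, so no subsearch or append is needed.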
I get IDS logs from Windows hosts, and I want to be able to identify them and see what details they present.
Hello, I installed Splunk SOAR successfully on CentOS 7, entered the IP address and port, and it opened the Splunk SOAR login page. I entered the credentials I remember using when installing, but they failed. I tried the default credentials admin/password, which didn't work either. I then tried to reset the password by running this command in <home_directory>/www: phenv python manage.py changepassword admin — with no success, and replacing admin with phantom didn't work either. Can anybody help me solve this issue? Many thanks.
Is there a way to run selected correlation searches over a certain time frame at once, or in a queue? Use case: if something happens to the SIEM/pipeline, the searches are not performed and the SOC doesn't get alerts. Could we use a query to run the selected correlation searches over the selected time frame via the ES Search app? Is that possible? And if it is, can we create a dashboard out of it for ease of use?
Hello, I am attempting to create a playbook where I use a SHA-256 hash as an input parameter and submit a sample to VirusTotal. Then I want to take that sample and create a ServiceNow ticket with the sample attached. Can anyone point me to a community playbook with this use case, or give me an example of how to create a ServiceNow ticket in Splunk SOAR as an action block?
Hey All, I'm trying to implement tokens in my base-search dashboard, but it seems that when I change a token value it has no effect on the actual table I'm using. I'd be glad if someone has an idea of what needs to change for it to work. This is the dashboard source:

<form version="1.1" theme="light">
  <label>Cloud One V2</label>
  <search id="CloudOne_base">
    <query>index=client* sourcetype=trendmicro:cloudone | fields _time bv_src_ip, bv_src_dvc_hostname, bv_user, name, bv_vendor_reason, bv_severity, target, bv_vendor_result</query>
    <earliest>$_time.earliest$</earliest>
    <latest>$_time.latest$</latest>
    <refresh>2m</refresh>
    <refreshType>delay</refreshType>
    <done>
      <set token="bv_src_ip">$ip$</set>
      <set token="bv_src_dvc_hostname">$host$</set>
      <set token="bv_user">$user$</set>
      <set token="bv_severity">$severity$</set>
    </done>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="_time">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="multiselect" token="ip" searchWhenChanged="true">
      <label>Source IP</label>
      <choice value="*">*</choice>
      <valuePrefix>bv_src_ip="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <fieldForLabel>bv_src_ip</fieldForLabel>
      <fieldForValue>bv_src_ip</fieldForValue>
      <search base="CloudOne_base">
        <query>| fields bv_src_ip | dedup bv_src_ip | sort bv_src_ip</query>
      </search>
      <default>*</default>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <initialValue>*</initialValue>
    </input>
    <input type="multiselect" token="host" searchWhenChanged="true">
      <label>Source Hostname</label>
      <choice value="*">*</choice>
      <default>*</default>
      <valuePrefix>bv_src_dvc_hostname="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <fieldForLabel>bv_src_dvc_hostname</fieldForLabel>
      <fieldForValue>bv_src_dvc_hostname</fieldForValue>
      <search base="CloudOne_base">
        <query>| stats count by bv_src_dvc_hostname | dedup bv_src_dvc_hostname | sort bv_src_dvc_hostname</query>
      </search>
      <initialValue>*</initialValue>
      <prefix>(</prefix>
      <suffix>)</suffix>
    </input>
    <input type="multiselect" token="user" searchWhenChanged="true">
      <label>User</label>
      <choice value="*">*</choice>
      <default>*</default>
      <valuePrefix>bv_user="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <fieldForLabel>bv_user</fieldForLabel>
      <fieldForValue>bv_user</fieldForValue>
      <search base="CloudOne_base">
        <query>| stats count by bv_user | dedup bv_user | sort bv_user</query>
      </search>
      <initialValue>*</initialValue>
    </input>
    <input type="checkbox" token="severity" searchWhenChanged="true">
      <label>Severity</label>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <valuePrefix>bv_severity="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <fieldForLabel>bv_severity</fieldForLabel>
      <fieldForValue>bv_severity</fieldForValue>
      <search base="CloudOne_base">
        <query>| stats count by bv_severity | dedup bv_severity | sort bv_severity</query>
      </search>
      <default>critical,high,informational,medium</default>
      <initialValue>critical,high,informational,medium</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>All Traffic Data</title>
      <table>
        <search base="CloudOne_base">
          <query>| where bv_src_ip!="-" | table _time, bv_src_ip, bv_src_dvc_hostname, bv_user, name, bv_vendor_reason, bv_severity, target, bv_vendor_result | rename bv_src_ip as "Source IP", bv_src_dvc_hostname as "Source Host", name as "Alert_Name", bv_vendor_reason as "Description", bv_severity as "Severity", bv_user as "User", bv_vendor_result as "Full Description", target as "Target Host"</query>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
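One thing worth checking in the dashboard above: the table's post-process search never references the input tokens ($ip$, $host$, $user$, $severity$), so changing them cannot affect the table; the <done>/<set> block only copies them into other tokens. A sketch of a post-process that actually applies them, reusing the token names already defined in the form:

```
<search base="CloudOne_base">
  <query>| search $ip$ $host$ $user$ $severity$
| where bv_src_ip!="-"
| table _time, bv_src_ip, bv_src_dvc_hostname, bv_user, name, bv_vendor_reason, bv_severity, target, bv_vendor_result</query>
</search>
```

Since the inputs already emit clauses like (bv_src_ip="..." OR ...), a plain | search with the tokens is enough to filter the base results.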
I'm using pfSense, which runs on FreeBSD, and I want to monitor basic performance metrics like RAM and CPU usage. Is there any Splunk app available that supports FreeBSD for this task? Thanks all.
Hello, I'm trying to figure out the best way to report and alert on Active Directory change events. I have admon/event forwarding set up on our DCs (admon on just one). I need to be able to alert on group changes, which is relatively easy to set up. However, I also need to alert when someone moves one of a specific list of users from one OU to another. When I make a change like that, I can see the event in Splunk from admon, but it just lists the object's properties. I can figure out what changed by looking at the previous event for the object and comparing a field with streamstats, but that assumes I know what to compare, and I won't always know what changed. So what's the best way to get this done? How can I alert that "admin X moved user Y from OU-A to OU-B"?
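For the OU case specifically, the OU is part of the object's distinguishedName, so you can compare just that one attribute across consecutive admon events instead of guessing which field changed. A sketch, assuming typical admon field names (distinguishedName, objectGUID, sAMAccountName; verify against your events) and a hypothetical lookup watched_users.csv listing the accounts you care about:

```
sourcetype=ActiveDirectory
    [| inputlookup watched_users.csv | fields sAMAccountName ]
| rex field=distinguishedName "^CN=[^,]+,(?<ou_path>.+)$"
| sort 0 _time
| streamstats current=f last(ou_path) as previous_ou by objectGUID
| where isnotnull(previous_ou) AND ou_path!=previous_ou
| table _time sAMAccountName previous_ou ou_path
```

This flags the move itself ("user Y from OU-A to OU-B"). Identifying which admin performed it usually requires correlating with the Windows Security event log (directory-service object change/move events), since admon snapshots don't record the actor.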
Hi, I'm facing an issue where my heavy forwarder is not able to receive logs on port 9997. We have already configured the UF to send on port 9997 and set up receiving on port 9997 on the HF, but when we telnet from the source to the HF on port 9997, it connects the first time and then fails to connect the next time. Because of this, we are not receiving UF logs on the HF.
Hi Team, my raw logs are below:

2023-08-04 10:06:12.750 [INFO ] [Thread-3] AssociationProcessor - compareTransformStatsData : statisticData: StatisticData [selectedDataSet=0, rejectedDataSet=0, totalOutputRecords=17897259, totalInputRecords=0, fileSequenceNum=0, fileHeaderBusDt=null, busDt=08/03/2023, fileName=SETTLEMENT_TRANSFORM_MERGE, totalAchCurrOutstBalAmt=0.0, totalAchBalLastStmtAmt=0.0, totalClosingBal=8.787189909105E10, sourceName=null, version=1, associationStats={}] ---- controlFileData: ControlFileData [fileName=SETTLEMENT_TRANSFORM_ASSOCIATION, busDate=08/03/2023, fileSequenceNum=0, totalBalanceLastStmt=0.0, totalCurrentOutstBal=0.0, totalRecordsWritten=17897259, totalRecords=0, totalClosingBal=8.787189909105E10]

I want to fetch both files. My current query is:

index="abc*" sourcetype=600000304_gg_abs_ipc2 " AssociationProcessor - compareTransformStatsData : statisticData: StatisticData" source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log"
| rex " AssociationProcessor - compareTransformStatsData : statisticData: StatisticData totalOutputRecords=(?<totalOutputRecords>),busDt=(?<busDt>),fileName=(?<fileName>),totalClosingBal=(?<totalClosingBal>)"
| eval TotalClosingBalance=tonumber(mvindex(split(totalClosingBal,"E"),0)) * pow(10,tonumber(mvindex(split(totalClosingBal,"E"),1)))
| table busDt fileName totalOutputRecords TotalClosingBalance
| sort busDt
| appendcols
    [ search index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 " AssociationProcessor* associationStats={}] ---- controlFileData: ControlFileData" source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log"
    | rex " AssociationProcessor* - associationStats={}] ---- controlFileData: ControlFileData ,busDate=(?<busDate>),fileSequenceNum=(?<fileSequenceNum>),totalRecordsWritten=(?<totalRecordsWritten>),totalRecords=(?<totalRecords>),totalClosingBal=(?<totalClosingBal>)"
    | rex "fileName=(?<fileName>SETTLEMENT_TRANSFORM_ASSOCIATION)"
    | eval TotalClosingBalance=tonumber(mvindex(split(totalClosingBal,"E"),0)) * pow(10,tonumber(mvindex(split(totalClosingBal,"E"),1)))
    | table busDate busDate fileName totalRecordsWritten TotalClosingBalance]
| sort busDt

But I am only getting the file name SETTLEMENT_TRANSFORM_MERGE, and I want both SETTLEMENT_TRANSFORM_MERGE and SETTLEMENT_TRANSFORM_ASSOCIATION. Please guide.
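Both file names live in the same event above (the StatisticData and ControlFileData blocks are in one log line), so appendcols isn't needed; one sketch is to extract each block's fields under distinct names with anchored rex patterns. The patterns below follow the sample log, but may need tuning against real events:

```
index="abc*" sourcetype=600000304_gg_abs_ipc2 "compareTransformStatsData" source="/amex/app/gfp-settlement-transform/logs/gfp-settlement-transform.log"
| rex "StatisticData \[[^\]]*busDt=(?<busDt>[^,\]]+)"
| rex "StatisticData \[[^\]]*fileName=(?<mergeFile>[^,\]]+)"
| rex "StatisticData \[[^\]]*totalOutputRecords=(?<totalOutputRecords>[^,\]]+)"
| rex "StatisticData \[[^\]]*totalClosingBal=(?<mergeClosingBal>[^,\]]+)"
| rex "ControlFileData \[fileName=(?<assocFile>[^,\]]+)"
| rex "ControlFileData \[[^\]]*totalRecordsWritten=(?<totalRecordsWritten>[^,\]]+)"
| rex "ControlFileData \[[^\]]*totalClosingBal=(?<assocClosingBal>[^,\]]+)"
| table busDt mergeFile totalOutputRecords mergeClosingBal assocFile totalRecordsWritten assocClosingBal
| sort busDt
```

The existing tonumber/pow eval from the question can then be applied to mergeClosingBal and assocClosingBal separately to expand the scientific notation.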
So, this PCRE regex works in testers, but not in Splunk:

^((http[s]?):\/)?\/?([^:\/\s]+)((\w+)*\/){2})

It should return https://someurl.com/first/ but in a Splunk search, this:

rex field=referer "referer=(?<referer>^((http[s]?):\/)?\/?([^:\/\s]+)((\w+)*\/){2})

returns the entire URL, i.e., https://someurl.com/first/second/third/fourth/etc. What's the proper way to get what I'm looking for? I'm confused that this works in testers but not in Splunk.