All Topics

Hello everyone, I have the following pattern of logs and I'm trying to use rex to extract the values. I started like this:

| rex field=_raw "attr_itx_media_type\s(?<midias>.*)"

I need to get everything between the double quotes, e.g. voicePerdida, smsCustom, email:

attr_itx_media_type [str] = "voicePerdida"
attr_itx_media_type [str] = "smsCustom"
attr_itx_media_type [str] = "email"
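One way to get only the quoted value is to match `[^"]+` between the quotes instead of `.*`. In SPL that would be roughly `| rex "attr_itx_media_type\s*\[str\]\s*=\s*\"(?<midias>[^\"]+)\""`; since rex uses PCRE-style regexes, the same pattern can be sanity-checked in Python (sample lines copied from the question):

```python
import re

# Sample lines modeled on the question's log pattern.
lines = [
    'attr_itx_media_type [str] = "voicePerdida"',
    'attr_itx_media_type [str] = "smsCustom"',
    'attr_itx_media_type [str] = "email"',
]

# Capture only what sits between the double quotes, not the rest of the line.
pattern = re.compile(r'attr_itx_media_type\s*\[str\]\s*=\s*"(?P<midias>[^"]+)"')

midias = [m.group("midias") for line in lines if (m := pattern.search(line))]
print(midias)  # → ['voicePerdida', 'smsCustom', 'email']
```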
Please help me reset a couple of tokens to "*" on a button click.

<html>
  <center>
    <div>
      <button id="buttonId" class="btn btn-primary">Reset</button>
    </div>
  </center>
</html>

require([
  'jquery',
  'splunkjs/mvc',
  'splunkjs/mvc/simplexml/ready!'
], function ($, mvc) {
  var tokens = mvc.Components.get("default");
  $('#buttonId').on("click", function (e) {
    tokens.set("SELECTED_PU", "*");
    tokens.set("SELECTED_PL", "*");
  });
});
I need to drop (not index) the log events that contain the word "notice". I understand this is done with the two files below. I have two doubts:

1. Is the regex OK?
2. The path is constantly changing; can I use a wildcard, e.g. [source::/folder/folder/logs/firewall-xxxxx/*]?

props.conf

[source::/folder/folder/logs/firewall-xxxxx/2020/12/4/local7.log]
TRANSFORMS-null = setnull

transforms.conf

[setnull]
REGEX = notice
DEST_KEY = queue
FORMAT = nullQueue

Sample log:

date=2019-05-10 time=11:37:47 logid="0000000013" type="traffic" subtype="forward" level="notice" vd="vdom1" eventtime=1557513467369913239 srcip=10.1.100.11 srcport=58012 srcintf="port12" srcintfrole="undefined" dstip=23.59.154.35 dstport=80 dstintf="port11" dstintfrole="undefined" srcuuid="ae28f494-5735-51e9-f247-d1d2ce663f4b" dstuuid="ae28f494-5735-51e9-f247-d1d2ce663f4b" poluuid="ccb269e0-5735-51e9-a218-a397dd08b7eb" sessionid=105048 proto=6 action="close" policyid=1 policytype="policy" service="HTTP" dstcountry="Canada" srccountry="Reserved" trandisp="snat" transip=172.16.200.2 transport=58012 appid=34050 app="HTTP.BROWSER_Firefox" appcat="Web.Client" apprisk="elevated" applist="g-default" duration=116 sentbyte=1188 rcvdbyte=1224 sentpkt=17 rcvdpkt=16 utmaction="allow" countapp=1 osname="Ubuntu" mastersrcmac="a2:e9:00:ec:40:01" srcmac="a2:e9:00:ec:40:01" srcserver=0 utmref=65500-742
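On the wildcard question: props.conf source:: stanzas do accept wildcards — `*` matches within a single path segment, while `...` matches across directory separators — so the changing date folders can be covered with one recursive stanza. A hedged sketch using the question's placeholder paths (with a slightly tighter regex, since the bare word "notice" could also match other parts of an event):

```
# props.conf -- '...' recurses through the changing /YYYY/MM/DD/ folders
[source::/folder/folder/logs/firewall-xxxxx/...]
TRANSFORMS-null = setnull

# transforms.conf -- anchoring on the level field is safer than the bare word
[setnull]
REGEX = level="notice"
DEST_KEY = queue
FORMAT = nullQueue
```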
Hi everyone, I have a requirement as below. I have a dashboard that consists of a dropdown and panels. The "Teams" dropdown is defined like this:

<input type="multiselect" token="teams" searchWhenChanged="true">
  <label>Teams</label>
  <choice value="All">All Teams</choice>
  <choice value="BLAZE">BLAZE</choice>
  <choice value="Oneforce">Oneforce</choice>
  <fieldForLabel>Teams</fieldForLabel>
  <prefix>(</prefix>
  <valuePrefix>Teams ="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <suffix>)</suffix>
  <initialValue>All</initialValue>
  <default>All</default>
  <change>
    <eval token="form.teams">case(mvcount('form.teams')=0,"All",mvcount('form.teams')&gt;1 AND mvfind('form.teams',"All")&gt;0,"All",mvcount('form.teams')&gt;1 AND mvfind('form.teams',"All")=0,mvfilter('form.teams'!="All"),1==1,'form.teams')</eval>
    <eval token="BLAZE">if(isnull(mvfind('form.teams',"BLAZE")),mvfind('form.teams',"All"),1)</eval>
    <eval token="Oneforce">if(isnull(mvfind('form.teams',"Oneforce")),mvfind('form.teams',"All"),1)</eval>
    <eval token="org_choice">if(mvfind('form.teams',"All")=0,$teams$)</eval>
  </change>
</input>

One panel shows multiple fields, including parent_chain, which comes from inputlookup chains.csv:

parent_chain
MAIN-->root-->BLAZE - E1-->Blaz Transformation - Data
MAIN-->root-->BLAZE - E3
MAIN-->root-->Oneforce-->FXIP

What I want: when I select "BLAZE" from the Teams dropdown, only the parent chains whose third word is "BLAZE" should be shown — basically the parent chains that include the word "BLAZE":

MAIN-->root-->BLAZE - E1-->Blaz Transformation - Data
MAIN-->root-->BLAZE - E3

When I select "Oneforce", only the parent chains containing the word "Oneforce" should come back:

MAIN-->root-->Oneforce-->FXIP
MAIN-->root-->Oneforce-->Support_Tools

And when I select "All Teams", all parent chains should be shown. I have passed the tokens as $BLAZE$ OR $Oneforce$ in the query, but the result is still not filtering: whether I select "BLAZE" or "Oneforce" from the Teams dropdown, all parent chains are still shown. Below is the panel code; note the tokens I passed and the lookup (inputlookup chains.csv) the parent chain comes from.

<row>
  <table>
    <search>
      <query>index=abc sourcetype=xyz source="/user.log" process-groups $BLAZE$ OR $Oneforce$|rename count as "Request Counts" |rex field=Request_URL "(?&lt;id&gt;[A-Za-z0-9]{8}[\-][A-Za-z0-9]{4}[\-][A-Za-z0-9]{4}[\-][A-Za-z0-9]{4}[\-][A-Za-z0-9]{12})"|stats count by Date ADS_Id Request_Type id ClickHere Request_URL|sort - ADS_Id |join type=outer id [inputlookup chains.csv]</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <fields>"Date", "ADS_Id","Request_Type", "Request_URL", "id", "parent_chain"</fields>
    <option name="count">100</option>
    <option name="dataOverlayMode">none</option>
    <option name="drilldown">cell</option>
    <option name="percentagesRow">false</option>
    <option name="refresh.display">progressbar</option>
    <option name="rowNumbers">false</option>
    <option name="totalsRow">false</option>
    <option name="wrap">true</option>
  </table>
</row>

Can someone please guide me on this? Thanks in advance.
hello
The search below works fine, except that the OnlineCount field is capped at 10000.

`OnOff`
| stats latest(_time) as _time by host
| eval DiffInSeconds = (now() - _time)
| eval DiffInMinutes = DiffInSeconds/60
| eval Status=if(DiffInSeconds<3601, "Online", "Offline")
| eval EventCreatedTime=strftime(_time,"%d-%b-%Y %H:%M:%S %p %Z")
| table host EventCreatedTime DiffInMinutes Status
| sort -EventCreatedTime
| eval Code = if(like(Status,"Online"), "Online", "Offline")
| lookup host_OnOff.csv HOSTNAME as host output SITE DEPARTMENT RESPONSIBLE_USER
| stats dc(host) AS OnlineCount by Code
| where Code = "Online"
| appendcols [| inputlookup host_OnOff.csv | rename HOSTNAME as host | search SITE=* | search RESPONSIBLE_USER=* | stats dc(host) as NbIndHost]
| fields OnlineCount NbIndHost
| eval OnlineCount = if(OnlineCount > 0, tostring(OnlineCount), "") + " / " + NbIndHost + " machines "

host_OnOff.csv is updated automatically by the scheduled search below:

| inputlookup fo_all | table HOSTNAME SITE CATEGORY RESPONSIBLE_USER DEPARTMENT | outputlookup host_OnOff.csv

How can I avoid this, please?
Hello, I have a problem where fields are not showing in the Fields sidebar when I run a search against certain indexes/sourcetypes. I have two search heads; when I run the same search on both SHs, the fields displayed in the sidebar are different. I have ensured that Verbose mode is selected and that I am selecting "All Fields" in the field selector popup. The search returns the same count of events on both, and I can confirm the fields are being extracted (the field extractions were set up months ago).

The search is: index="mimecast" sourcetype="mimecastsiemst" mcType=email_ttp_url

If I run this search on one SH, the "recipient" field is displayed, for example; if I run it on the other SH, it is not. I have also noticed that if I exclude sourcetype="mimecastsiemst" from the search on the SH that does display this field and rerun the search, the field is no longer displayed in the sidebar. Other fields behave the same way. Can someone please explain why this is happening, and how I can get searches on both SHs to return all the extracted fields? Thanks!
Hi, the field has the format "errorOccuredOn":"2020-12-04+01:00" and I want to extract only the date, i.e. only 2020-12-04 should appear in the result. Let me know how this can be achieved.
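In SPL this is typically a one-line rex such as `| rex field=_raw "\"errorOccuredOn\":\"(?<date>\d{4}-\d{2}-\d{2})"`. A quick check of the same pattern in Python, using the raw fragment from the question:

```python
import re

# JSON-style fragment from the question; the date is the YYYY-MM-DD part
# before the "+01:00" UTC offset.
raw = '"errorOccuredOn":"2020-12-04+01:00"'

m = re.search(r'"errorOccuredOn":"(?P<date>\d{4}-\d{2}-\d{2})', raw)
print(m.group("date"))  # → 2020-12-04
```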
Global IBM ISIM/ISAM question to the Splunkers: in what ways can Splunk integrate with IBM ISIM/ISAM in the field of authentication and authorization? Many thanks, Pieter
I use timechart to analyze the data, and I want to normalize the values in the timechart:

... | timechart span=3d count by step

What I get is like:

_time  1  2  3
11/2   2  3  4
11/3   3  4  5

And I want to divide each row by its maximum value to get this:

_time  1    2     3
11/2   0.5  0.75  1
11/3   0.6  0.8   1

How can I get this? Thanks a lot in advance.
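In SPL, one common (untested here) approach is two `foreach` passes: first accumulate the row maximum into a helper field, then divide each column by it, e.g. `| foreach * [eval rowmax=max(coalesce(rowmax,0),'<<FIELD>>')] | foreach * [eval <<FIELD>>='<<FIELD>>'/rowmax] | fields - rowmax`. The row-wise arithmetic itself, illustrated in Python with the question's numbers:

```python
# Each dict is one timechart row: _time plus one column per "step" value.
rows = [
    {"_time": "11/2", "1": 2, "2": 3, "3": 4},
    {"_time": "11/3", "1": 3, "2": 4, "3": 5},
]

def normalize_row(row):
    # Divide every numeric column by the row's maximum, leaving _time alone.
    values = {k: v for k, v in row.items() if k != "_time"}
    m = max(values.values())
    return {"_time": row["_time"], **{k: v / m for k, v in values.items()}}

normalized = [normalize_row(r) for r in rows]
print(normalized)
# → [{'_time': '11/2', '1': 0.5, '2': 0.75, '3': 1.0},
#    {'_time': '11/3', '1': 0.6, '2': 0.8, '3': 1.0}]
```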
We are trying to predict infrastructure metrics such as CPU, memory, and disk utilization using the ML Toolkit. To predict the next (future) day's data, how much history needs to be considered for training? Any suggestions, or relevant docs on this, would be appreciated. Also, which model/algorithm is best suited to predict the future values, and how can we test whether the predicted values are correct?
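On the "how to test" part: independent of any particular MLTK algorithm, the standard check is a hold-out evaluation — train on all but the most recent day, forecast that day, and score the forecast against what actually happened (for example with mean absolute error). A minimal sketch with invented synthetic data and a naive seasonal baseline:

```python
# Hold-out evaluation of a naive daily-seasonal forecast: predict each hour
# as the value observed 24 hours earlier, then score on the final day.
hourly_cpu = [30 + (h % 24) for h in range(24 * 7)]  # synthetic 7 days

train, test = hourly_cpu[:-24], hourly_cpu[-24:]

# Naive seasonal baseline: tomorrow looks like today.
forecast = train[-24:]

mae = sum(abs(f - t) for f, t in zip(forecast, test)) / len(test)
print(mae)  # → 0.0 for this perfectly periodic synthetic series
```

A real series won't score zero, of course; the point is that comparing models by their hold-out error is how you decide which algorithm and how much history to use.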
Search task info:

Unable to distribute to peer xxxx at uri xxxx using the uri-scheme = https because peer has status = 2. Verify uri-scheme, connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available.

splunkd.log: /search/job/xxxxx/search.log : Connection closed by peer

systemctl restart Splunk output info:

Kernel: NMI watchdog: BUG: soft lockup - CPU#18 stuck for 22s
How do I extract the date using the rex command? The format is "time":"2020-12-04+01:00".
Hello dear community. I'm a beginner with Splunk, and I would like your help today on a project I am working on. I have to calculate the availability of application services. I ingest data from a database using Splunk DB Connect; it contains all the events recorded by a monitoring tool. Using the timestamps, I would like to calculate the duration of an incident: the time between the moment the status becomes failed and the return to normal. It is tricky because an event can occur several times per day, so I need some kind of foreach that reads a line with severity 2 ("critical") and finds its matching return to normal, the line with severity 0 ("OK"), using the timestamp — the return to normal of course occurs later. I hope I managed to explain the problem. Thank you for your precious help.
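In SPL this kind of pairing is often done with `transaction` or `streamstats`; here is a small illustration of the pairing logic itself, assuming each service's events carry a timestamp and a numeric severity (the sample data is invented — real events would come from the DB Connect input):

```python
# Pair each severity-2 ("critical") event with the next severity-0 ("OK")
# event for the same service, and report the outage duration in seconds.
events = [  # (epoch_time, service, severity)
    (1000, "app1", 2),
    (1600, "app1", 0),
    (2000, "app1", 2),
    (2300, "app2", 2),
    (2900, "app1", 0),
    (3200, "app2", 0),
]

open_incident = {}  # service -> start time of the unresolved critical event
durations = []      # (service, start_time, duration_seconds)

for ts, service, severity in sorted(events):
    if severity == 2 and service not in open_incident:
        open_incident[service] = ts
    elif severity == 0 and service in open_incident:
        start = open_incident.pop(service)
        durations.append((service, start, ts - start))

print(durations)
# → [('app1', 1000, 600), ('app1', 2000, 900), ('app2', 2300, 900)]
```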
Hi all, I am trying to replace values that are themselves fields, inside another field, using rex with mode=sed. For example: I have two fields, ID and error, both dynamic and unique, and some IDs appear inside the error field. I would like to replace the various IDs in the error field with a common string:

| rex field=error mode=sed "s/ID/My Id/g"

If I type in a literal regular expression it replaces a few, but it is not very accurate, which is why I want to use the ID field itself. However, rex is not able to substitute the field's value. Kindly assist on how to do the same. Thank you!
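The reason it fails: in `rex mode=sed` the pattern is a literal string, so "ID" matches the characters I-D, not the field's value. The eval `replace()` function, by contrast, takes expressions, so it should accept the field as its pattern argument, e.g. `| eval error=replace(error, ID, "My Id")` (assuming the ID values contain no regex metacharacters). The underlying per-row substitution, sketched in Python with invented sample rows:

```python
import re

# Invented sample rows, each with a unique ID that also appears in 'error'.
rows = [
    {"ID": "A123", "error": "request A123 failed upstream"},
    {"ID": "B456", "error": "timeout while processing B456"},
]

for row in rows:
    # Use the row's own ID value as the pattern; re.escape guards against
    # regex metacharacters in the ID.
    row["error"] = re.sub(re.escape(row["ID"]), "My Id", row["error"])

print([r["error"] for r in rows])
# → ['request My Id failed upstream', 'timeout while processing My Id']
```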
Hello everyone, I am aware that this question was asked several times before, but the most recent answer I found is more than 2 years old, so I want to give it one more try. We are currently trying to get all data from FMC to Splunk; the way to do it is the "Cisco eStreamer eNcore for Splunk" app, but our Splunk environment runs on Windows Server. Is there any way (without rewriting all the .sh scripts from scratch in PowerShell) to get that up and running? Thanks a lot for the feedback!
Hello, I am new to Splunk and apologize if this problem has been solved somewhere already (and for my English too). The problem is that the alert triggers normally but doesn't send an email. The error messages in the logs:

12-04-2020 10:23:44.791 +0300 WARN Pathname - Pathname 'C:\Program Files\Splunk\bin\Python3.exe C:\Program Files\Splunk\etc\apps\search\bin\sendemail.py "results_link=http://***:8000/app/search/@go?sid=rt_scheduler__admin__search__RMD59adeaae1a7c6404f_at_1607065450_42.22" "ssname=Authentification failure" "graceful=True" "trigger_time=1607066624" results_file="C:\Program Files\Splunk\var\run\splunk\dispatch\rt_scheduler__admin__search__RMD59adeaae1a7c6404f_at_1607065450_42.22\results.csv.gz" "is_stream_malert=False"' larger than MAX_PATH, callers: call_sites=[0xca9a9d, 0xcab8c1, 0x14e61c2, 0x14e2f4d, 0x13677b3, 0x12f9276, 0x6b69d5, 0x6b5ffe, 0x6a4532, 0xa6142f, 0xd1ae0e]

12-04-2020 10:24:11.364 +0300 ERROR ScriptRunner - stderr from 'C:\Program Files\Splunk\bin\Python3.exe C:\Program Files\Splunk\etc\apps\search\bin\sendemail.py "results_link=http://***:8000/app/search/@go?sid=rt_scheduler__admin__search__RMD59adeaae1a7c6404f_at_1607065450_42.22" "ssname=Authentification failure" "graceful=True" "trigger_time=1607066624" results_file="C:\Program Files\Splunk\var\run\splunk\dispatch\rt_scheduler__admin__search__RMD59adeaae1a7c6404f_at_1607065450_42.22\results.csv.gz" "is_stream_malert=False"': ERROR:root:[WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond while sending mail to: ***@***.ru

alert_actions.conf:

[email]
auth_password = ***
auth_username = ***@gmail.com
pdf.header_left = none
pdf.header_right = none
use_tls = 1
mailserver = smtp.gmail.com:587

I've tried SSL as well as TLS and different ports (587 and 465). In the Gmail settings I have enabled IMAP access and allowed less secure apps. Do you have any idea what the problem could be? Thank you!
Hi, I am trying to remove elements from XML in a log file using the heavy forwarder via transforms.conf. I've tried several variants; this one has come close, but it is only redacting a single instance of what it finds, e.g. <name>REDACTED<name>.

Current transforms.conf:

[redact_xml]
REGEX = <(.*)>[^<]*<\/\1>
FORMAT = <$1>REDACTED<$1>
DEST_KEY = _raw

Example: the log file might have:

<?xml version="1.0" encoding="UTF-8"?>
<breakfast_menu>
<food><name>Belgian Waffles</name><price>$5.95</price><description>Two of our famous Belgian Waffles with plenty of real maple syrup</description><calories>650</calories></food>
<food><name>Strawberry Belgian Waffles</name><price>$7.95</price><description>Light Belgian waffles covered with strawberries and whipped cream</description><calories>900</calories></food>
<food><name>Berry-Berry Belgian Waffles</name><price>$8.95</price><description>Belgian waffles covered with assorted fresh berries and whipped cream</description><calories>900</calories></food>
<food><name>French Toast</name><price>$4.50</price><description>Thick slices made from our homemade sourdough bread</description><calories>600</calories></food>
<food><name>Homestyle Breakfast</name><price>$6.95</price><description>Two eggs, bacon or sausage, toast, and our ever-popular hash browns</description><calories>950</calories></food>
</breakfast_menu>

And I want to push into Splunk the redacted version:

<?xml version="1.0" encoding="UTF-8"?>
<breakfast_menu>
<food><name>REDACTED<name><price>REDACTED<price><description>REDACTED<description><calories>REDACTED<calories></food>
<food><name>REDACTED<name><price>REDACTED<price><description>REDACTED<description><calories>REDACTED<calories></food>
<food><name>REDACTED<name><price>REDACTED<price><description>REDACTED<description><calories>REDACTED<calories></food>
<food><name>REDACTED<name><price>REDACTED<price><description>REDACTED<description><calories>REDACTED<calories></food>
<food><name>REDACTED<name><price>REDACTED<price><description>REDACTED<description><calories>REDACTED<calories></food>
</breakfast_menu>
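Two things stand out here (hedged observations, not a definitive fix): FORMAT = <$1>REDACTED<$1> drops the slash in the closing tag — it would need to be <$1>REDACTED</$1> to keep the XML well-formed — and for masking raw data Splunk's SEDCMD in props.conf applies a global sed-style substitution, along the lines of SEDCMD-redact = s/<([a-z_]+)>[^<]*<\/\1>/<\1>REDACTED<\/\1>/g. The back-referencing regex itself can be checked in Python:

```python
import re

# One XML element per <tag>value</tag> pair; \1 back-references the tag name
# so the closing tag must match the opening one, and [^<]* keeps the match
# from spanning nested elements.
xml = ('<food><name>Belgian Waffles</name><price>$5.95</price>'
       '<calories>650</calories></food>')

redacted = re.sub(r'<([a-z_]+)>[^<]*</\1>', r'<\1>REDACTED</\1>', xml)
print(redacted)
# → <food><name>REDACTED</name><price>REDACTED</price><calories>REDACTED</calories></food>
```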
My field aliases are set like this:

browser = BROWSER
referrer = REFERRER
req = REQ
req_id = REQ_ID
src = SRC

When I search the CSV that includes these field aliases, I get results in all of my aliased fields except req = REQ: I only get req, but not REQ. Why is that, and how would I be able to fix it?

Thank you,
hello
In host.csv, I have 4 fields: HOSTNAME, SITE, DEPARTMENT, CATEGORY.

[| inputlookup host_OnOff.csv | fields HOSTNAME SITE DEPARTMENT CATEGORY | rename HOSTNAME as host ]
`OnOff`
| stats latest(_time) as _time by host SITE CATEGORY DEPARTMENT

As you can see, I need to cross these fields with the host field in `OnOff` in order to compute the stats afterwards, but no values are displayed. What is wrong, please?
I'm looking for help filtering my mstats data using an eventtype OR tag I've created for groups of hosts. Here's an example of my CPU metrics dashboard panel:

| mstats avg(_value) as value where `nmon_metrics_index` metric_name=os.unix.nmon.cpu.cpu_all.Sys_PCT OR metric_name=os.unix.nmon.cpu.cpu_all.User_PCT OR metric_name=os.unix.nmon.cpu.cpu_all.Wait_PCT host=$host$ groupby metric_name, host span=1m
| `def_cpu_load_percent`
| timechart `nmon_span` avg(cpu_load_percent) AS cpu_load_percent by host useother=false

I've tried appending a non-metrics subsearch to search against the metric data using my tag AND host, so that only the selected hosts are returned in my panel:

index = example_index (eventtype=test1 OR eventtype=test2 OR eventtype=test3)
| search (host=* AND tag = test2)
| append [ | mstats avg(_value) as value where `nmon_metrics_index` metric_name=os.unix.nmon.cpu.cpu_all.Sys_PCT OR metric_name=os.unix.nmon.cpu.cpu_all.User_PCT OR metric_name=os.unix.nmon.cpu.cpu_all.Wait_PCT host=dac51elo.pjm.com groupby metric_name, host span=1m
| `def_cpu_load_percent` ]
| timechart `nmon_span` avg(cpu_load_percent) AS cpu_load_percent by host useother=false