All Topics

Hello, I have a dashboard with a multi-select dropdown that contains a list of all database names. When the dashboard first runs, the token that would hold the database name (if a selection were made in the dropdown) is set to * so that all database events are read, and only the top 5 are returned. My query looks like this:

index=whatever shard IN ("*") | chart count as result by shard | sort -result | head 5

Say the display panel shows results for these databases: 229, 290, 112, 273, 242. I want to set the dropdown labelled Shards, i.e. the form token "form.shardToken", to the list of databases returned, and also update the token shardToken with the same list. Hopefully that all makes sense.
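One possible approach, as a minimal sketch rather than a confirmed solution: run the top-5 search as its own dashboard search and set both tokens from its result in a done handler. The index, field, and token names below come from the question; whether a comma-separated string correctly pre-selects entries in a multiselect can vary by Splunk version.

<search>
  <query>
    index=whatever shard IN ("*")
    | chart count as result by shard
    | sort -result | head 5
    | stats values(shard) as shards
    | eval shards=mvjoin(shards, ",")
  </query>
  <done>
    <set token="shardToken">$result.shards$</set>
    <set token="form.shardToken">$result.shards$</set>
  </done>
</search>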
Dear All, I have a requirement to parse the data correctly. I am getting merged events and want separate events for the samples below. Could someone help me with what configuration needs to be changed, and how can I learn regex? I need events to break at [22/05/11@08:13:58.246+0200] P-20316642 T-000001...; the timestamp, P, and T values can differ. Appreciate your help.

[22/05/11@08:14:25.252+0200] P-37945744 T-000001 1 AS -- (Procedure: 'olb-stp-monitoring.r' Line:273) DML TRACE ERROR : use of refreshUsrRig , decomissioning ongoing
[22/05/11@08:14:03.266+0200] P-29491506 T-000001 1 AS -- (Procedure: 'olb-stp-monitoring.r' Line:273) DML TRACE ERROR : use of refreshUsrRig , decomissioning ongoing
[22/05/11@08:13:58.246+0200] P-20316642 T-000001 1 AS -- (Procedure: 'olb-stp-monitoring.r' Line:273) DML TRACE ERROR : use of refreshUsrRig , decomissioning ongoing
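A sketch of the kind of props.conf stanza that usually handles this, assuming the events arrive on separate physical lines and the sourcetype name is yours to choose; the lookahead breaks only where a new bracketed timestamp begins:

[your:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\[\d{2}/\d{2}/\d{2}@\d{2}:\d{2}:\d{2}\.\d{3}[+-]\d{4}\])
TIME_PREFIX = \[
TIME_FORMAT = %y/%m/%d@%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30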
When searching with a day span, it works fine across multiple dates, but it creates an issue when searching within a single day: extra time buckets such as 2:00 AM, 6:00 AM, etc. appear within the day. Below is the code snippet for the row. Does anyone have a solution for this?

<row>
  <panel>
    <title>API Count by Environment - Success</title>
    <chart>
      <search>
        <query>index="cust-*-wfd-api-gtw-ilb" "/v1/platform/change_indicators" (host="*$env$*") | search sourcetype="nginx:plus:access" | where like(status, "%2%%") | eval env = mvindex(split(host, "-"), 1) | timechart span=$timespan$ count(request) as TotalCount by env</query>
        <earliest>$timepicker.earliest$</earliest>
        <latest>$timepicker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
      <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
      <option name="charting.axisTitleX.text">Time</option>
      <option name="charting.axisTitleX.visibility">visible</option>
      <option name="charting.axisTitleY.text">Request Count</option>
      <option name="charting.axisTitleY.visibility">visible</option>
      <option name="charting.axisTitleY2.visibility">visible</option>
      <option name="charting.axisX.abbreviation">none</option>
      <option name="charting.axisX.scale">linear</option>
      <option name="charting.axisY.abbreviation">auto</option>
      <option name="charting.axisY.scale">linear</option>
      <option name="charting.axisY2.abbreviation">none</option>
      <option name="charting.axisY2.enabled">0</option>
      <option name="charting.axisY2.scale">inherit</option>
      <option name="charting.chart">area</option>
      <option name="charting.chart.bubbleMaximumSize">50</option>
      <option name="charting.chart.bubbleMinimumSize">10</option>
      <option name="charting.chart.bubbleSizeBy">area</option>
      <option name="charting.chart.nullValueMode">gaps</option>
      <option name="charting.chart.showDataLabels">all</option>
      <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
      <option name="charting.chart.stackMode">stacked</option>
      <option name="charting.chart.style">shiny</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.layout.splitSeries">0</option>
      <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
      <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
      <option name="charting.legend.mode">standard</option>
      <option name="charting.legend.placement">right</option>
      <option name="charting.lineWidth">2</option>
      <option name="refresh.display">progressbar</option>
      <option name="trellis.enabled">0</option>
      <option name="trellis.scales.shared">1</option>
      <option name="trellis.size">medium</option>
    </chart>
  </panel>
</row>
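One thing worth checking, as an assumption about the cause rather than a confirmed diagnosis: timechart aligns buckets to clock/epoch boundaries of the span, not to the search window, so a within-day search with a sub-day $timespan$ can produce buckets at offsets like 2:00 AM and 6:00 AM. If the buckets should line up with the time picker instead, the aligntime bin option may help:

| timechart span=$timespan$ aligntime=earliest count(request) as TotalCount by env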
Hello, from the dropdown list below, I need to update the search events with an eval case command:

<input type="dropdown" token="debit" searchWhenChanged="true">
  <label>Débit</label>
  <choice value="2 Mb/s">2 Mb/s</choice>
  <choice value="4 Mb/s">4 Mb/s</choice>
</input>

So I tried something like this, but it doesn't work:

| eval debit="$debit$"
| eval deb=case(debit=="2 Mb/s", site=="TOTO" OR site=="TITI", debit=="4 Mb/s", site=="TUTU" OR site=="TATA", 1==1,site)
| table site deb

Could you help please?
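The intent here is ambiguous, but note that case() takes condition/value pairs, while the second argument above (site=="TOTO" OR site=="TITI") is itself a condition rather than a value. If the goal is to return the site when it matches the selected rate, one possible shape is the sketch below; fold the site test into the condition side:

| eval deb=case("$debit$"=="2 Mb/s" AND (site=="TOTO" OR site=="TITI"), site,
                "$debit$"=="4 Mb/s" AND (site=="TUTU" OR site=="TATA"), site,
                1==1, null())
| table site deb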
index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*zvkk*" AND message="*2022-05-09*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "\"timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time

In the query above, I would like the message="*2022-05-09*" filter to be set automatically; basically, I need to set up an alert that always searches for yesterday's date.
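One way this is often done, as a sketch assuming the date inside the message should simply be yesterday's calendar date: build the wildcard term in a subsearch, since eval results cannot be used directly in the base search. A subsearch that returns a field literally named "search" injects its value into the outer query:

index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*zvkk*"
    [| makeresults
     | eval search="message=\"*" . strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d") . "*\""
     | fields search]
| ...

Alternatively, if the date in the message always matches the event time, setting the alert's time range to earliest=-1d@d latest=@d avoids the subsearch entirely.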
Hi there, I am new to Splunk and I am playing with some live data. My problem is that I keep exceeding my daily indexing limit of 500 MB. Because of that, I am sometimes unable to run queries, and each time I need to reinstall, which hurts my capability and is time-consuming. I want to increase the daily limit so that I can keep working with my live test data. Thanks in advance.
Is it possible to map one index to another index?
Splunk newbie here! My use case is to: 1. Monitor AWS EC2 web server metrics. (How do I push CPU, iostat, and other stats to Splunk? I tried to install an app/add-on, but the dashboards are empty. I need some help building the graphs and populating the metrics.) 2. Integrate Splunk with Grafana. (I was able to successfully connect Splunk as a data source, but I am not sure how to build dashboards in Grafana for Splunk data.) Any advice/recommendations to accomplish this are appreciated.
Hello, I completed a few UF-based data ingestions and Splunk is receiving events from them, but I have some issues with event breaking. I have 2 types of files: 1) text files with a header and pipe delimiters, and 2) XML files.

For the text files, the header info is showing up within the Splunk events, and events are not breaking as expected: in most cases, one Splunk event contains more than one source event.

For the XML files, everything within one source file is treated as a single Splunk event, but it should be split into multiple events based on the XML tag.

Any thoughts/recommendations to resolve these issues would be highly appreciated. Thank you! The props/inputs configuration files and source files are given below.

For Text Files:

props.conf
[ds:audit]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
HEADERFIELD_LINE_NUMBER=1
INDEXED_EXTRACTIONS=psv
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%Q%z
TIMESTAMP_FIELDS=TimeStamp

inputs.conf
[monitor:///opt/audit/DS/DS_EVENTS*.txt]
sourcetype=ds:audit
index=ds_test

Sample
serID|UserType|System|EventType|EventId|Subject|SessionID|SrcAddr|EventStatus|ErrorMsg|TimeStamp|Additional Application Data |Device
p22bb4r|TEST|DS|USER| VIEW_NODE |ELEMENT<843006481>|131e9d5b-e84e-567d-a6b1-775f58993f68|null|00||2022-06-14T09:01:55.001+0000||NA
p22bbs1|TEST|DS|USER| FULL_SEARCH |ELEMENT<843006481>|121e7d5b-f84e-467d-a6b1-775f58993f68|null|00||2021-06-14T09:01:50.001+0000||NA
p22bbw3|TEST|DS|USER| FULL_SEARCH | ELEMENT< 343982854>|5b8fb22e-eeed-4802-8b07-8559dbfe1e45|null|00||2021-06-14T08:54:08.054+0000||NA
ts70sbr4|TEST|DS|USER|VIEW_NODE| ELEMENT< 35382854>|5b8fb22e-eeed-4802-8b07-8559dbfe1e45|null|00||2021-06-14T08:54:16.054+0000||NA
ts70sbd3|TEST|DS|USER|FULL_SEARCH|ELEMENT<933982854>|5b8fb22e-eeed-4802-8b07-8559dbfe1e45|null|00||2021-06-14T08:53:54.053+0000||NA

For XML Files:

props.conf
[secops:audit]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]*)<MODTRANSL>
TIME_PREFIX=<TIMESTAMP>
TIME_FORMAT=%Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD=14
TRUNCATE=2500

inputs.conf
[monitor:///opt/app/secops/logs/audit_secops_log*.XML]
sourcetype=secops:audit
index=secops_test

Sample Data
<?xml version="x.1" encoding="UTF-8"?><DSDATA><MODTRANSL><TIMESTAMP>20190621121321</TIMESTAMP><USERID>d23bsrb</USERID><USERTYPE>SECOPS</USERTYPE><SYSTEM>DS</SYSTEM><EVENTTYPE>ADMIN</EVENTTYPE><EVENTID>SYS</EVENTID><ID>0300001</ID><SRCADDR>10.210.135.108</SRCADDR><RETURNCODE>00</RETURNCODE><VARDATA> Initiated New Entity Status: AP</VARDATA></MODTRANSL><MODTRANSL><TIMESTAMP>20190621121416</TIMESTAMP><USERID> d23bsrb </USERID><USERTYPE>SECOPS</USERTYPE><SYSTEM>DSI</SYSTEM><EVENTTYPE>ADMIN</EVENTTYPE><EVENTID>SYS</EVENTID><ID>000000000</ID><SRCADDR>10.210.135.120</SRCADDR><RETURNCODE>00</RETURNCODE><VARDATA> Entity Status: Approved New Entity Status: TI</VARDATA></MODTRANSL><MODTRANSL><TIMESTAMP>20190621121809</TIMESTAMP><USERID>sj45yrs</USERID><USERTYPE>SECOPS</USERTYPE><SYSTEM>DSI</SYSTEM><EVENTTYPE>ADMIN</EVENTTYPE><EVENTID>DS_OPD</EVENTID><ID>2192345</ID><SRCADDR>10.212.25.19</SRCADDR><RETURNCODE>00</RETURNCODE><VARDATA> 43ded7433b314eb58d2307e9bc536bd3</VARDATA > <DURATION>124</DURATION> </MODTRANSL</DSDATA>
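Two hedged observations based only on the configs above; treat these as sketches to test, not verified fixes. For the text files, the header setting name may be the issue: props.conf documents it as HEADER_FIELD_LINE_NUMBER (with underscores), and because INDEXED_EXTRACTIONS parsing happens on the universal forwarder, the stanza must be deployed on the UF itself, not only on the indexers:

[ds:audit]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
INDEXED_EXTRACTIONS = psv
HEADER_FIELD_LINE_NUMBER = 1
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%Q%z
TIMESTAMP_FIELDS = TimeStamp

For the XML files, LINE_BREAKER discards whatever its first capture group matches, and a file that arrives as one physical line gives ([\r\n]*) nothing to anchor on. A commonly suggested pattern is an empty capture group between the closing and opening tags, so the break lands between events and no text is discarded:

[secops:audit]
SHOULD_LINEMERGE = false
LINE_BREAKER = </MODTRANSL>()<MODTRANSL>
TIME_PREFIX = <TIMESTAMP>
TIME_FORMAT = %Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 14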
Hi All, I have a question: how do I create an index using the REST API in an indexer-clustered environment? Version: Splunk Enterprise 8.x
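For context, a sketch of the usual split; the hostnames and credentials below are placeholders. On a standalone instance an index can be created directly over REST, but in an indexer cluster indexes.conf is normally edited under master-apps on the cluster manager and pushed to the peers; the bundle push itself can also be triggered over REST:

# Standalone instance:
curl -k -u admin:changeme https://splunk:8089/services/data/indexes -d name=my_new_index

# Indexer cluster: after editing indexes.conf in
# $SPLUNK_HOME/etc/master-apps/<app>/local/ on the cluster manager,
# apply the configuration bundle to the peers:
curl -k -u admin:changeme -X POST https://cluster-manager:8089/services/cluster/master/control/default/apply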
Hi, I have a few alerts created which look at the failure rates of my services, with a condition that says: if the failure rate is > 10% AND the number of failed requests is > 200, trigger the alert. This is really not the ideal way to do the monitoring. Is there a way in Splunk to use AI to detect anomalies or outliers over time? Basically, if Splunk detects a failure pattern and that pattern is consistent, don't trigger an alert; only trigger when it goes beyond the threshold. Can we do this kind of thing in Splunk using built-in ML or AI?
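For reference, a hedged sketch of what this can look like with the Splunk Machine Learning Toolkit (the field names and span here are assumptions): fit a DensityFunction model on the historical failure rate, then apply it in the scheduled alert so only statistical outliers fire:

| timechart span=1h sum(failed) as failures sum(total) as total
| eval failure_rate = failures / total
| fit DensityFunction failure_rate threshold=0.01 into failure_rate_model

and in the alert search:

... | apply failure_rate_model | where 'IsOutlier(failure_rate)' = 1

Splunk also ships the anomalydetection SPL command, which needs no extra app and may be enough for a first pass.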
I am performing a lookup in a main search which returns earliest_event and latest_event timestamp values. I would like to use these timestamp values as parameters for a subsearch. The search would be similar to the following:

index=foo ...........
| lookup lookuptable.csv session_id OUTPUTNEW session_id, earliest_event, latest_event
...........
| append [ search index=bar earliest=earliest_event latest=latest_event ...........]

The time parameters for the subsearch are not being accepted, though. Is there a different way this can be accomplished?
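One pattern that often works, sketched under the assumption that the lookup's timestamps are epoch values or time strings Splunk accepts: have an inner subsearch read the lookup and emit earliest/latest as literal search arguments via return. The field references in the original are not expanded because subsearch arguments must be literal values at parse time:

index=foo ...
| append
    [ search index=bar
        [| inputlookup lookuptable.csv
         | stats min(earliest_event) as earliest max(latest_event) as latest
         | return earliest latest ] ]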
I am trying to create an alert based on a stats count value... I want to alert if the count is less than or greater than 500.
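A minimal sketch of one way to do this (the base search is a placeholder): end the search with the condition, then set the alert to trigger when the number of results is greater than 0:

index=your_index ...
| stats count
| where count != 500

count != 500 is equivalent to "count < 500 OR count > 500", so the alert fires whenever the count is anything other than exactly 500.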
Hi All, some files have been deleted by someone from one of the servers, and I need to investigate. We only know the host name, not which files were deleted or by whom. Can anyone tell me the exact query I should type in the search head to fetch the logs from Splunk and identify whether any files have been deleted from my server? I'm totally new to Splunk; kindly assist. Regards, Vipin
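A hedged starting point, assuming a Windows server with file-system (object access) auditing enabled and Security event logs forwarded to Splunk; the index name below is a placeholder, and event 4660 (object deleted) is usually correlated with 4663 via Handle_ID to recover the file name:

index=wineventlog host="<your_host>" EventCode IN (4660, 4663)
| table _time, host, EventCode, Account_Name, Object_Name, Accesses

Without that auditing enabled (or equivalent auditd rules on Linux), Splunk will have no record of the deletion to search.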
I'm currently building a query that reports the top 10 URLs of the top 10 users. Although my current query works, I would like a cleaner look.

Query:

index="zscaler" sourcetype="zscalernss-web" appclass!=Enterprise user!=unknown
| stats count by user, url
| sort 0 user -count
| streamstats count as standings by user
| where standings < 11
| eventstats sum(count) as total by category
| sort 0 -total user -count

The results look like this:

user                     url           count  rank
john.doe@example.com     example.com   100    1
john.doe@example.com     facebook.com  99     2
john.doe@example.com     twitter.com   98     3
john.doe@example.com     google.com    97     4
john.doe@example.com     splunk.com    96     5
jane.doe@example.com     example.com   100    1
jane.doe@example.com     facebook.com  99     2
jane.doe@example.com     twitter.com   98     3
jane.doe@example.com     google.com    97     4
jane.doe@example.com     splunk.com    96     5
and so forth

I would like it to look like this:

user                     url           count
john.doe@example.com     example.com   100
                         facebook.com  99
                         twitter.com   98
                         google.com    97
                         splunk.com    96
jane.doe@example.com     example.com   100
                         facebook.com  99
                         twitter.com   98
                         google.com    97
                         splunk.com    96
and so forth
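One common trick to get that look in a stats table, as a sketch: it blanks the repeated user value rather than truly merging cells. Since the rows are already ranked per user by streamstats, clear user on every row after the first:

...
| where standings < 11
| eval user=if(standings=1, user, "")
| table user, url, count

Another option is | stats list(url) as url list(count) as count by user, which collapses each user into a single multivalue row.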
Hi - I want to list APIs and their latencies/response times, and compare the latencies in a table like the one below. Explanation: the test runs for 1 hour and each ramp is 15 minutes (1X to 4X).

API    1X load (avg or p95)   2X load (avg or p95)   3X load (avg or p95)   4X load (avg or p95)
API1
API2

Current query:

host=somehost sourcetype=somesourcetype endpoint=* latency=* received
| search *SOMESTRING*
| timechart p95(latency) span=15m by endpoint
| foreach * [| eval "<<FIELD>>"=round('<<FIELD>>', 0)]

This query works without any issue and displays results like the table below, but the results are not accurate: the response times of the 2022-05-09 00:00:00 and 2022-05-09 00:15:00 buckets overlap, and that becomes the 1X data. How can I cleanly separate 1X to 4X if I executed a test from 2022-05-09 13:00:00 to 14:00:00?

_time                  API1   API2   API3
2022-05-09 00:00:00
2022-05-09 00:15:00
2022-05-09 00:30:00
2022-05-09 00:45:00
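A hedged sketch of one way to label the ramps explicitly instead of relying on clock-aligned timechart buckets; the hardcoded start time is an assumption and could be replaced by a dashboard token. Compute which 15-minute window each event falls in relative to the test start, then chart by that label so each API is a row and each ramp a column:

host=somehost sourcetype=somesourcetype endpoint=* latency=* received
| search *SOMESTRING*
| eval test_start=strptime("2022-05-09 13:00:00", "%Y-%m-%d %H:%M:%S")
| where _time >= test_start AND _time < test_start + 3600
| eval ramp="X" . tostring(floor((_time - test_start) / 900) + 1)
| chart p95(latency) over endpoint by ramp
| foreach X* [| eval "<<FIELD>>"=round('<<FIELD>>', 0)]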
Hi, how do I query ingestion in GB by each index instead of just the top 10?
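A commonly used search for this, hedged only in that it assumes access to the _internal index on the license manager: summing license_usage.log yourself avoids the top-10-plus-OTHER squashing of the built-in report:

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by idx
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB
| fields idx, GB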
I'm completely stuck here. I'm trying to extract the "Path" from a log file with this format:

Time: 05/10/2022 11:26:53
Event: Traffic
IP Address: xxxxxxxxxx
Description: HOST PROCESS FOR WINDOWS SERVICES
Path: C:\Windows\System32\svchost.exe
Message: Blocked Incoming UDP - Source xxxxxxxxxx : (xxxx) Destination xxxxxxxxxx : (xxxxx)
Matched Rule: Block all traffic

using this regex:

((Path:\s{1,2})(?<fwpath>.+))

It does exactly what I want when I use rex: it extracts the path as "fwpath". However, when I do it as a field extraction, it matches the rest of the log entry. Why is it behaving differently in these two cases?
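A hedged workaround regardless of the root cause (the difference is often a dot-matches-newline flag being applied in one extraction path but not the other): make the capture unable to cross a line boundary by using a negated character class instead of .+ :

Path:\s{1,2}(?<fwpath>[^\r\n]+)

This extracts everything after "Path:" up to the end of that line, in both rex and a props.conf/field-extraction context.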
Hey, I recently made a bar graph in Splunk that adds new data points for the duration of a test. My only problem with this graph is that after 13 entries, it starts adding data at the beginning of the graph. So, for example, after 28th April, new data starts getting added at the beginning of the bar graph (the screenshot referred to in the original post is omitted here). I want new data to be continuously added at the end of the graph, with the old data at the beginning aging out. How can I accomplish this? The query I am using for the existing search is below:

index="aws_dev"
| eval st=strptime(startTime, "%Y-%m-%dT%H:%M:%S.%3N%Z"), et=strptime(endTime, "%Y-%m-%dT%H:%M:%S.%3N%Z")
| eval st=mvindex(st,0)
| eval et=mvindex(et,0)
| eval diff = et - st
| eval date_wday=lower(strftime(_time,"%A"))
| eval date_w=strftime(_time,"%d-%b-%y %a %H:%M:%S")
| where NOT (date_wday = "sunday" OR date_wday = "saturday")
| chart values(diff) by date_w
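One likely cause, inferred from the query rather than verified against the data: chart by date_w orders the x-axis lexicographically, and the "%d-%b-%y" format sorts "01-May-22" before "28-Apr-22", so newer days can appear to the left of older ones. A sketch of a fix is to chart by a sortable date string:

| eval date_w=strftime(_time, "%Y-%m-%d %a")
| chart values(diff) by date_w

Alternatively, timechart span=1d keeps native _time ordering, and the visible window is then controlled by the time picker (e.g. last 13 days), which also ages out the oldest bars automatically.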
Hi Team, I have two log sources, say x and y. From x we need to extract a field x1; then, for each x1, we need to take the last six digits, search the logs from source y, and extract a field y1. After this, we need to plot x1 vs y1, and find the x1 values for which a y1 is present and those for which it is not. Logically, we need to show the end-to-end journey of a transaction, where we have two different sources on the same server.
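A hedged sketch of the usual shape for this kind of correlation; all field and source names here are placeholders, since the real extractions aren't given. Search both sources at once, derive a shared key from the last six characters of x1, and group on it:

(source=x OR source=y)
| eval key=coalesce(substr(x1, -6), y_key)
| stats values(x1) as x1 values(y1) as y1 by key
| eval y1_present=if(isnull(y1), "no", "yes")
| table x1, y1, y1_present

substr(x1, -6) takes the last six characters of x1, and y_key stands for whatever field in source y carries those same six digits.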