All Posts



Hello, I have a question about customising my time picker. I'd like to display two panels, one for 24 hours and one for 1 month, and I'd like panel 1 to be displayed when the time period selected is 24h, and the second panel to be displayed when the time picker is set to the current month.

I tried this, but it doesn't work:

<form version="1.1" theme="light">
  <label>dev_vwt_dashboards_uc47</label>
  <init>
    <set token="time_range">-24h@h</set>
    <set token="date_connection">*</set>
    <set token="time_connection">*</set>
    <set token="IPAddress">*</set>
    <set token="User">*</set>
    <set token="AccessValidation">*</set>
  </init>
  <!--fieldset autoRun="false" submitButton="true">
    <input type="time" token="field1" searchWhenChanged="true">
      <label>Period</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset-->
  <fieldset autoRun="false" submitButton="true">
    <input type="dropdown" token="time_range" searchWhenChanged="true">
      <label>Select Time Range</label>
      <choice value="-24h@h">Last 24 hours</choice>
      <!--choice value="@mon">Since Beginning of Month</choice-->
      <default>Last 24 hours</default>
      <!--change>
        <condition value="-24h@h">
          <set token="tokShowPanel1">true</set>
          <unset token="tokShowPanel2"></unset>
        </condition>
        <condition value="@mon">
          <unset token="tokShowPanel1"></unset>
          <set token="tokShowPanel2">true</set>
        </condition>
      </change-->
    </input>
  </fieldset>
  <row>
    <panel>
      <input type="text" token="date_connection" searchWhenChanged="true">
        <label>date_connection</label>
        <default>*</default>
        <prefix>date_connection="</prefix>
        <suffix>"</suffix>
        <initialValue>*</initialValue>
      </input>
      <input type="text" token="time_connection" searchWhenChanged="true">
        <label>time_connection</label>
        <default>*</default>
        <prefix>time_connection="</prefix>
        <suffix>"</suffix>
        <initialValue>*</initialValue>
      </input>
      <input type="text" token="IPAddress" searchWhenChanged="true">
        <label>IPAddress</label>
        <default>*</default>
        <prefix>IPAddress="</prefix>
        <suffix>"</suffix>
        <initialValue>*</initialValue>
      </input>
      <input type="text" token="User" searchWhenChanged="true">
        <label>User</label>
        <default>*</default>
        <prefix>User="</prefix>
        <suffix>"</suffix>
        <initialValue>*</initialValue>
      </input>
      <input type="dropdown" token="AccessValidation" searchWhenChanged="true">
        <label>AccessValidation</label>
        <default>*</default>
        <prefix>AccessValidation="</prefix>
        <suffix>"</suffix>
        <initialValue>*</initialValue>
        <choice value="*">All</choice>
        <choice value="failure">failure</choice>
        <choice value="success">success</choice>
        <choice value="denied">denied</choice>
      </input>
    </panel>
  </row>
  <row>
    <panel id="AD_Users_Authentication_last_24_hours" depends="$tokShowPanel1$">
      <title>AD Users Authentication</title>
      <table>
        <search>
          <query>|loadjob savedsearch="anissa.bannak.ext@abc.com:search:dev_vwt_saved_search_uc47_AD_Authentication_Result" |rename UserAccountName as "User" |search $date_connection$ $time_connection$ $IPAddress$ $User$ $AccessValidation$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="count">100</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="Last Connection Status">
          <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette>
        </format>
        <format type="color" field="Access Validation">
          <colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette>
        </format>
        <format type="number" field="AuthenticationResult"></format>
        <format type="color" field="AuthenticationResult">
          <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette>
        </format>
        <format type="color" field="Access_Validation">
          <colorPalette type="map">{"success":#55C169,"failure":#D41F1F}</colorPalette>
        </format>
        <format type="color" field="AccessValidation">
          <colorPalette type="map">{"success":#118832,"failure":#D41F1F}</colorPalette>
        </format>
        <format type="color" field="last_connection_status">
          <colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
  <row>
    <panel id="AD_Users_Authentication_1_month" depends="$tokShowPanel2$">
      <title>AD Users Authentication</title>
      <table>
        <search>
          <query>|loadjob savedsearch="anissa.bannak.ext@abc.com:search:dev_vwt_saved_search_uc47_AD_Authentication_Result" |rename UserAccountName as "User" |search $date_connection$ $time_connection$ $IPAddress$ $User$ $AccessValidation$</query>
          <earliest>$time_range.earliest$</earliest>
          <latest>$time_range.latest$</latest>
        </search>
        <option name="count">100</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="Last Connection Status">
          <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette>
        </format>
        <format type="color" field="Access Validation">
          <colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette>
        </format>
        <format type="number" field="AuthenticationResult"></format>
        <format type="color" field="AuthenticationResult">
          <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette>
        </format>
        <format type="color" field="Access_Validation">
          <colorPalette type="map">{"success":#55C169,"failure":#D41F1F}</colorPalette>
        </format>
        <format type="color" field="AccessValidation">
          <colorPalette type="map">{"success":#118832,"failure":#D41F1F}</colorPalette>
        </format>
        <format type="color" field="last_connection_status">
          <colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</form>
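A sketch of one way to get the two-panel behaviour described above, assuming the commented-out <change> block was the intended mechanism: re-enable the @mon choice and the change handler so each dropdown selection sets one panel token and unsets the other. Note also that $time_range.earliest$ and $time_range.latest$ only populate for <input type="time"> tokens, so with a dropdown the second panel's time bounds should use the dropdown value itself (an untested sketch, not a confirmed fix):

```xml
<input type="dropdown" token="time_range" searchWhenChanged="true">
  <label>Select Time Range</label>
  <choice value="-24h@h">Last 24 hours</choice>
  <choice value="@mon">Since Beginning of Month</choice>
  <default>Last 24 hours</default>
  <change>
    <condition value="-24h@h">
      <set token="tokShowPanel1">true</set>
      <unset token="tokShowPanel2"></unset>
    </condition>
    <condition value="@mon">
      <unset token="tokShowPanel1"></unset>
      <set token="tokShowPanel2">true</set>
    </condition>
  </change>
</input>
```

In the month panel, <earliest>$time_range$</earliest> with <latest>now</latest> would then avoid relying on the unpopulated $time_range.earliest$ token.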
Hi, I am trying to tie multiple events describing a single transaction together. This is my test example:

Oct 21 08:19:42 host.company.com 2024-10-21T13:19:42.391606+00:00 host sendmail[8920]: 49L2pZMi015103: to=recipient@company.com, delay=00:00:01, xdelay=00:00:01, mailer=esmtp, tls_verify=NONE, tls_version=NONE, cipher=NONE, pri=261675, relay=host.company.com. [X.X.X.X], dsn=2.6.0, stat=Sent (105f7c9d-76a2-a595-e329-617f87ba2602@company.com [InternalId=19267223300036, Hostname=HOSTNAME.company.com] 145203 bytes in 0.663, 213.865 KB/sec Queued mail for delivery)

Oct 21 08:19:41 host.company.com 2024-10-21T13:19:41.715034+00:00 host filter_instance1[31332]: rprt s=42cu1tbqet m=1 x=42cu1tbqet-1 mod=mail cmd=msg module= rule= action=continue attachments=4 rcpts=1 routes=allow_relay,default_inbound,internalnet size=143489 guid=jb9XbZ5Gez432DgKTDz22jNgntXrF6xb hdr_mid=105f7c9d-76a2-a595-e329-617f87ba2602@company.com qid=49L2pZMi015103 hops-ip=Y.Y.Y.Y subject="Your Weekly  Insights" duration=0.095 elapsed=0.353

Oct 21 08:19:41 host.company.com 2024-10-21T13:19:41.714759+00:00 usdfwppserai1 filter_instance1[31332]: rprt s=42cu1tbqet m=1 x=42cu1tbqet-1 cmd=send profile=mail qid=49L2pZMi015103 rcpts=recipient@company.com

Oct 21 08:19:41 host.company.com 2024-10-21T13:19:41.675365+00:00 host sendmail[15103]: 49L2pZMi015103: from=sender@company.com, size=141675, class=0, nrcpts=1, msgid=105f7c9d-76a2-a595-e329-617f87ba2602@company.com, proto=ESMTP, daemon=MTA, tls_verify=NONE, tls_version=NONE, cipher=NONE, auth=NONE, relay=host.company.com [Z.Z.Z.Z]

I can extract the message id (105f7c9d-76a2-a595-e329-617f87ba2602@company.com) and qid (49L2pZMi015103) from the topmost message and tie it this way to the bottom one, but those are only two events out of a series of four. How would I generate a complete view of all four events? I am looking to get the sender and recipient SMTP addresses, subject, and message sizes from the top and bottom events.
Any help would be greatly appreciated.
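A hedged sketch of one common approach: all four events carry the same qid (49L2pZMi015103), so a stats aggregation keyed on it can merge fields from every event into one row. The index and the rex patterns below are illustrative assumptions, not confirmed extractions for this environment:

```spl
index=mail ("sendmail[" OR "filter_instance")
| rex "sendmail\[\d+\]: (?<qid1>[A-Za-z0-9]+):"
| rex "qid=(?<qid2>[A-Za-z0-9]+)"
| eval qid=coalesce(qid1, qid2)
| rex "(^|[\s(])from=(?<sender>[^,\s]+)"
| rex "(^|[\s(])to=(?<recipient>[^,\s]+)"
| rex "subject=\"(?<subject>[^\"]+)\""
| rex "size=(?<size>\d+)"
| stats values(sender) as sender values(recipient) as recipient
        values(subject) as subject values(size) as size by qid
```

`| transaction qid` would also group the four events, but a stats by qid is usually cheaper and gives you exactly the fields you list.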
Hello @FPERVIL, it looks like it's not listening on 9997; there may be an issue during the startup of the Edge Processor. Did you already deploy a pipeline? Have you tried checking edge.log to verify whether there are specific errors?
Hi @gcusello, how can we select white for the min area in an area chart? Which option should we select?
So, I just figured out that the customer does not connect any DB to Splunk using DBConnect. They just use: database native log -> universal forwarder -> Splunk. Your doubt is valid: why not use the native logs directly? But now that I see they do not connect their DB to Splunk and just forward the logs, they can configure the UF in such a way that it sends us those logs as well.
Hello @rrovers Did you already try the "display" layout options?
https://docs.splunk.com/Documentation/Splunk/9.3.1/DashStudio/layoutConfigOptions
For font sizes:
https://docs.splunk.com/Documentation/SplunkCloud/9.0.2303/DashStudio/chartsTable
Hope this helps.
I recently installed a Splunk Edge Processor and I noticed it's not listening on port 9997. I can see it as a node on the Splunk Cloud Platform, but I can't send on-prem data from my universal forwarders to it because it's not listening on that port.

When I check the ports it's currently listening on, here are the results:

ss -tunlp
Netid State  Recv-Q Send-Q Local Address:Port    Peer Address:Port Process
udp   UNCONN 0      0      0.0.0.0:44628         0.0.0.0:*
udp   UNCONN 0      0      0.0.0.0:161           0.0.0.0:*
udp   UNCONN 0      0      127.0.0.1:323         0.0.0.0:*
tcp   LISTEN 0      2048   127.0.0.1:37139       0.0.0.0:*         users:(("edge_linux_amd6",pid=28942,fd=7))
tcp   LISTEN 0      128    0.0.0.0:22            0.0.0.0:*
tcp   LISTEN 0      2048   127.0.0.1:8888        0.0.0.0:*         users:(("edge_linux_amd6",pid=28942,fd=8))
tcp   LISTEN 0      128    0.0.0.0:8089          0.0.0.0:*         users:(("splunkd",pid=983,fd=4))
tcp   LISTEN 0      100    127.0.0.1:25          0.0.0.0:*
tcp   LISTEN 0      128    127.0.0.1:44001       0.0.0.0:*
tcp   LISTEN 0      2048   127.0.0.1:43335       0.0.0.0:*         users:(("edge_linux_amd6",pid=28942,fd=3))
tcp   LISTEN 0      128    127.0.0.1:199         0.0.0.0:*
tcp   LISTEN 0      2048   127.0.0.1:1777        0.0.0.0:*         users:(("edge_linux_amd6",pid=28942,fd=11))
tcp   LISTEN 0      2048   192.168.66.120:10001  0.0.0.0:*
tcp   LISTEN 0      2048   127.0.0.1:10001       0.0.0.0:*

As you can see, 9997 is not in there. I confirmed the shared settings for this node to make sure it's expected to receive data on that port:

Splunk forwarders
The Edge Processor settings for receiving data from universal or heavy forwarders.
Port: 9997
Maximum channels: 300 (the number of channels that all Edge Processors can use to receive data from Splunk forwarders)

Any clues as to why this is happening?
What is your Splunk configuration to listen for UDP 5514?
Hello @rrovers, The layout in the dashboard studio can be set to either absolute or grid. However, there is currently no option to set dynamic height and width of the table based on number of rows. Thanks, Tejas.   --- If the above solution helps, an upvote is appreciated..!! 
Hi everyone, I'm working on a Splunk query to analyze API request metrics, and I want to avoid using a join because it is making my query slow. The main challenge is that I need to aggregate multiple metrics (min, max, avg, and percentiles) and pivot HTTP status codes (S) into columns, but the current approach with xyseries drops the additional values: Min, Max, Avg, P95, P98, P99. The reason for using xyseries is that it generates columns dynamically, so the result contains only the statuses actually present in the data, with their counts.

Here's the original working query with join:

index=sample_index sourcetype=kube:container:sample_container
| fields U, S, D
| where isnotnull(U) and isnotnull(S) and isnotnull(D)
| rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
| stats count as TotalReq, by ApiName, S
| xyseries ApiName S, TotalReq
| addtotals labelfield=ApiName col=t label="ColumnTotals" fieldname="TotalReq"
| join type=left ApiName
    [ search index=sample_index sourcetype=kube:container:sample_container
      | fields U, S, D
      | where isnotnull(U) and isnotnull(S) and isnotnull(D)
      | rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
      | stats min(D) as Min, max(D) as Max, avg(D) as Avg, perc95(D) as P95, perc98(D) as P98, perc99(D) as P99 by ApiName]
| addinfo
| eval Availability% = round(100 - ('500'*100/TotalReq),
| fillnull value=100 Availability%
| eval range = info_max_time - info_min_time
| eval AvgTPS=round(TotalReq/range,5)
| eval Avg=floor(Avg)
| eval P95=floor(P95)
| eval P98=floor(P98)
| eval P99=floor(P99)
| sort TotalReq
| table ApiName, 1*, 2*, 3*, 4*, 5*, Min, Max, Avg, P95, P98, P99, AvgTPS, Availability%, TotalReq

I attempted to optimize it by combining the metrics calculation into a single stats command and using eventstats or streamstats to calculate the additional statistics without dropping the required fields. I also tried passing the additional metrics to xyseries as below, but that did not help:

| stats count as TotalReq, min(D) as Min, max(D) as Max, avg(D) as Avg, perc95(D) as P95, perc98(D) as P98, perc99(D) as P99 by ApiName, S
| xyseries ApiName S, TotalReq, Min, Max, Avg, P95, P98, P99

PS: Tried with ChatGPT, it did not help, so seeking help from real experts.
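One join-free pattern worth trying (a sketch, not verified against this data): use eval's {}-style dynamic field naming to create one field per status code on each event, then compute the pivot and all the duration metrics in a single stats pass. The wildcarded sums reproduce the dynamic per-status columns that xyseries was generating, while Min/Max/Avg and the percentiles survive because they come from the same stats:

```spl
index=sample_index sourcetype=kube:container:sample_container
| fields U, S, D
| where isnotnull(U) and isnotnull(S) and isnotnull(D)
| rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
| eval {S}=1
| stats sum(1*) as 1* sum(2*) as 2* sum(3*) as 3* sum(4*) as 4* sum(5*) as 5*
        count as TotalReq min(D) as Min max(D) as Max avg(D) as Avg
        perc95(D) as P95 perc98(D) as P98 perc99(D) as P99 by ApiName
```

As with xyseries, status columns only appear when that status exists in the data; a `| fillnull value=0 1* 2* 3* 4* 5*` afterwards may be wanted before computing Availability%.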
I have a playbook set up to run on all events in a 10minute_timer label using the Timer app. These events do not contain artifacts. I've noticed the playbook runs fine when testing on a test_event that contains an artifact. When I moved it over to run on the timer label, it dies when it gets to my filter block. I've also run the exact same playbook on an event in my test_label which also didn't contain an artifact, and that too fails. I've tested it without the filter block and used a decision instead; that works fine. Both blocks share the same scope in the Advanced settings dropdown. My conditions in the filter block are fine and should evaluate to true; I added a test condition on the label name to make sure of this, and even that is not triggering. I'm open to being wrong, but I'm not sure what else I can do to test it, so I believe this is a bug with SOAR. Thanks.
We use Splunk for creating reports. When I insert a table in Dashboard Studio, I have to define a width and height for it. But the height should be different for each period we run the dashboard for, because the number of rows can differ per period. How can I do this without changing the layout every month?
I found the problem: when Splunk was installed, it got installed as a heavy forwarder, so it was looking for the next indexer. I deleted outputs.conf, restarted Splunk, and it started working.
I did connect MySQL to Splunk using DBConnect, but not on the universal forwarder; I do not know how I can connect a DB on a UF. Also, I am still figuring out how I can send the audit logs for the connected DB using the universal forwarder.
We have different lookup inputs into the Splunk ES asset list framework. Some values for assets change over time, for example due to DHCP or DNS renaming. When an asset gets a new IP due to e.g. DHCP, the lookup used as input into the asset framework is updated accordingly, but the merged asset lookup "asset_lookup_by_str" will contain both the new and the old IP. So the new IP is appended on the asset; it's not replacing the old IP. Due to "merge magic" that runs under the hood in the asset framework, over time this creates strange assets with many DNS names and many IPs. My question is, how long are asset list field values stored in the Splunk ES asset list framework? Are there any hidden values that keep track of, say, an IP, and will Splunk eventually remove the IP from the asset in the merged list? Or will the IP stay there forever, so these "multivalue assets" just grow with more and more DNS names and IPs until the mv field limits are reached? And, if I reduce the asset list mv field limits, how does Splunk prioritize which values will be included? Do the values already on the merged list have priority, or do new values have priority? I tried looking in the documentation but could not find answers to my questions there. Hoping someone will share some insights here. Thanks!
Hello, it was not confirmed previously, but it appeared unlikely at the time. Previously, the issue persisted even after I changed the schedule from 2 7 * * * to 2,27 7 * * * and later even to 2 7,19 * * *, which required UF restarts at different times of day. While time sync does occur, it doesn't occur often enough to have affected all of these attempts. Today, I double-checked one of the systems more consistently affected (index=<WindowsLogs> host=<REDACT> EventCode=4616 4616 NewTime) and found that a time synchronization did not occur around the time the issue manifested, especially at the time of a UF service restart.
I have set up Splunk; the machine has 15:26 as local time, but when I check the splunkd.log time it is 20:26. Why is there a difference in time between local time and splunkd.log time?
You have too many searches trying to run at the same time. That means some searches have to wait (are delayed) until a search slot becomes available. Use the Scheduled Searches dashboard in the Cloud Monitoring Console to see which times have the most delays, and reschedule some of the searches that run at those times.
To refer to a field in an event, use single quotes around the field name.  Dollar signs refer to tokens, which are not part of an event. | `filter_maintenance_services('fields.ServiceID')`
Hi, I am a rookie in SPL and I have this general correlation search for application events: index="foo" sourcetype="bar" (fields.A="something" "fields.B"="something else") If this were an application-specific search, I could just specify the service in the search. But what I want to achieve is to use a service id from the event, rather than a fixed value, to suppress results for that specific service. If I append  | `filter_maintenance_services("e5095542-9132-402f-8f17-242b83710b66")` to the search it works, but if I use the event data service id it does not. Ex.  | `filter_maintenance_services($fields.ServiceID$)` I suspect that it has to do with fields.ServiceID not being populated when the filter is deployed. How can I get this to work?