Yes... this is my first deployment of this node. I installed the software on a Linux VM, and at a minimum I would expect it to be listening and waiting for data on port 9997. It's definitely connecting to the cloud on that port. I don't see anything in the edge.log file that would indicate why it's not listening on that port. I do see the following, but I'm not sure what it refers to: "message":"current settings have previously caused failures. aborting update","type":"provided","status":"failed"},{"time":"2024-10-21T16:16:37.959Z","settings_id":"3080980952365928851","type":"telemetry","status":"running"}]}}
I would suggest a stacked bar chart, leaving min/max/curr/curr-1/curr-2 as chart overlays, but I don't know if that would solve your problem. Stack below-min (white) / between_max_min (shaded) / above-max (white). Calculate the above-max segment as some percentage above the overall max value, i.e. overall_max = max(all_numbers) x 1.25. It's the only way I can think of to get the below-min area white, but I think it also violates some of the other things you were asking for.
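The stacking arithmetic can be sketched in plain Python (a hypothetical helper, not Splunk; `all_numbers`, `vmin`, and `vmax` stand for the chart's values and the min/max overlays):

```python
# Sketch of the three-segment stack: a white base up to min, a shaded band
# from min to max, and a white cap that pads every bar to the same overall
# height, where overall_max = max(all_numbers) * 1.25.

def stack_segments(all_numbers, vmin, vmax):
    overall_max = max(all_numbers) * 1.25
    return {
        "below_min": vmin,                # drawn white
        "between_max_min": vmax - vmin,   # drawn shaded
        "above_max": overall_max - vmax,  # drawn white
    }
```

The three segments always sum to `overall_max`, which is what keeps each bar the same total height so only the shaded band varies visually.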
Try something like this (although you will have to tweak it to get the size you want):

| eventstats values(hdr_mid) as msgid by qid
| stats values(from) as sender, values(to) as recipient, values(subject) as subject, values(size) as size by msgid
Ideally it should clear right away; however, if not, try manually electing a new SH captain and waiting 5-10 minutes for the SH bundle to replicate.
I want to be able to change the color of a text input border when you focus on the input box. I want to change the blue border to red when the field is empty. I have the JavaScript logic but not the CSS that would change the blue border. Here is the CSS I have so far, but all it does is put a border around the whole input panel, not just the text box:

.required button {
  border: 2px solid #f6685e !important;
}
Glad you saw sense and ditched ChatGPT! Try something like this:

index=sample_index sourcetype=kube:container:sample_container
| fields U, S, D
| where isnotnull(U) and isnotnull(S) and isnotnull(D)
| rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
| eventstats min(D) as Min, max(D) as Max, avg(D) as Avg, perc95(D) as P95, perc98(D) as P98, perc99(D) as P99 by ApiName
| stats count as TotalReq, by ApiName, Min, Max, Avg, P95, P98, P99, S
| eval {S}=TotalReq
| stats values(1*) as 1* values(2*) as 2* values(3*) as 3* values(4*) as 4* values(5*) as 5* sum(TotalReq) as TotalReq by ApiName, Min, Max, Avg, P95, P98, P99
| addtotals labelfield=ApiName col=t label="ColumnTotals" 1* 2* 3* 4* 5* TotalReq
| addinfo
| eval Availability% = round(100 - ('500'*100/TotalReq),8)
| fillnull value=100 Availability%
| eval range = info_max_time - info_min_time
| eval AvgTPS=round(TotalReq/range,5)
| eval Avg=floor(Avg)
| eval P95=floor(P95)
| eval P98=floor(P98)
| eval P99=floor(P99)
| sort TotalReq
| table ApiName, 1*, 2*, 3*, 4*, 5*, Min, Max, Avg, P95, P98, P99, AvgTPS, Availability%, TotalReq
Thanks @tscroggins for your answer, but I still have a big problem with JavaScript: sometimes Splunk Web doesn't load the code, sometimes the code doesn't work, and sometimes it works without my changing anything. I have read the documentation, I use the refresh endpoint (http://<ip:port>/debug/refresh), I'm using my browser in incognito mode, and I restart Splunk Web whenever I add a new JS file, but nothing helps. I even tried fully restarting Splunk. Thanks for your help.
I believe I figured out what was wrong. It turns out our admin forgot to point the newly installed SH cluster member to the license manager. It is now pointing to the LM. How long does it take for the license error to clear?
Hello, I have a question about customising my time picker. I'd like to display two panels, one for 24 hours and one for 1 month. And I'd like panel 1 to be displayed when the time range selected is 24h, and the second panel to be displayed when the time picker is set to the current month. I tried this, but it doesn't work: <form version="1.1" theme="light"> <label>dev_vwt_dashboards_uc47</label> <init> <set token="time_range">-24h@h</set> <set token="date_connection">*</set> <set token="time_connection">*</set> <set token="IPAddress">*</set> <set token="User">*</set> <set token="AccessValidation">*</set> </init> <!--fieldset autoRun="false" submitButton="true"> <input type="time" token="field1" searchWhenChanged="true"> <label>Period</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> </fieldset--> <fieldset autoRun="false" submitButton="true"> <input type="dropdown" token="time_range" searchWhenChanged="true"> <label>Select Time Range</label> <choice value="-24h@h">Last 24 hours</choice> <!--choice value="@mon">Since Beginning of Month</choice--> <default>Last 24 hours</default> <!--change> <condition value="-24h@h"> <set token="tokShowPanel1">true</set> <unset token="tokShowPanel2"></unset> </condition> <condition value="@mon"> <unset token="tokShowPanel1"></unset> <set token="tokShowPanel2">true</set> </condition> </change--> </input> </fieldset> <row> <panel> <input type="text" token="date_connection" searchWhenChanged="true"> <label>date_connection</label> <default>*</default> <prefix>date_connection="</prefix> <suffix>"</suffix> <initialValue>*</initialValue> </input> <input type="text" token="time_connection" searchWhenChanged="true"> <label>time_connection</label> <default>*</default> <prefix>time_connection="</prefix> <suffix>"</suffix> <initialValue>*</initialValue> </input> <input type="text" token="IPAddress" searchWhenChanged="true"> <label>IPAddress</label> <default>*</default> <prefix>IPAddress="</prefix> 
<suffix>"</suffix> <initialValue>*</initialValue> </input> <input type="text" token="User" searchWhenChanged="true"> <label>User</label> <default>*</default> <prefix>User="</prefix> <suffix>"</suffix> <initialValue>*</initialValue> </input> <input type="dropdown" token="AccessValidation" searchWhenChanged="true"> <label>AccessValidation</label> <default>*</default> <prefix>AccessValidation="</prefix> <suffix>"</suffix> <initialValue>*</initialValue> <choice value="*">All</choice> <choice value="failure">failure</choice> <choice value="success">success</choice> <choice value="denied">denied</choice> </input> </panel> </row> <row> <panel id="AD_Users_Authentication_last_24_hours" depends="$tokShowPanel1$"> <title>AD Users Authentication</title> <table> <search> <query>|loadjob savedsearch="anissa.bannak.ext@abc.com:search:dev_vwt_saved_search_uc47_AD_Authentication_Result" |rename UserAccountName as "User" |search $date_connection$ $time_connection$ $IPAddress$ $User$ $AccessValidation$</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="count">100</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> <format type="color" field="Last Connection Status"> <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette> </format> <format type="color" field="Access Validation"> <colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette> </format> <format type="number" field="AuthenticationResult"></format> <format type="color" field="AuthenticationResult"> <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette> </format> <format type="color" field="Access_Validation"> <colorPalette type="map">{"success":#55C169,"failure":#D41F1F}</colorPalette> </format> <format type="color" field="AccessValidation"> <colorPalette type="map">{"success":#118832,"failure":#D41F1F}</colorPalette> </format> <format type="color" field="last_connection_status"> 
<colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette> </format> </table> </panel> </row> <row> <panel id="AD_Users_Authentication_1_month" depends="$tokShowPanel2$"> <title>AD Users Authentication</title> <table> <search> <query>|loadjob savedsearch="anissa.bannak.ext@abc.com:search:dev_vwt_saved_search_uc47_AD_Authentication_Result" |rename UserAccountName as "User" |search $date_connection$ $time_connection$ $IPAddress$ $User$ $AccessValidation$</query> <earliest>$time_range.earliest$</earliest> <latest>$time_range.latest$</latest> </search> <option name="count">100</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> <format type="color" field="Last Connection Status"> <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette> </format> <format type="color" field="Access Validation"> <colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette> </format> <format type="number" field="AuthenticationResult"></format> <format type="color" field="AuthenticationResult"> <colorPalette type="map">{"failure":#D94E17,"success":#55C169}</colorPalette> </format> <format type="color" field="Access_Validation"> <colorPalette type="map">{"success":#55C169,"failure":#D41F1F}</colorPalette> </format> <format type="color" field="AccessValidation"> <colorPalette type="map">{"success":#118832,"failure":#D41F1F}</colorPalette> </format> <format type="color" field="last_connection_status"> <colorPalette type="map">{"success":#55C169,"failure":#D94E17}</colorPalette> </format> </table> </panel> </row> </form>
Hi, I am trying to tie together multiple events describing a single transaction. This is my test example:

Oct 21 08:19:42 host.company.com 2024-10-21T13:19:42.391606+00:00 host sendmail[8920]: 49L2pZMi015103: to=recipient@company.com, delay=00:00:01, xdelay=00:00:01, mailer=esmtp, tls_verify=NONE, tls_version=NONE, cipher=NONE, pri=261675, relay=host.company.com. [X.X.X.X], dsn=2.6.0, stat=Sent (105f7c9d-76a2-a595-e329-617f87ba2602@company.com [InternalId=19267223300036, Hostname=HOSTNAME.company.com] 145203 bytes in 0.663, 213.865 KB/sec Queued mail for delivery)

Oct 21 08:19:41 host.company.com 2024-10-21T13:19:41.715034+00:00 host filter_instance1[31332]: rprt s=42cu1tbqet m=1 x=42cu1tbqet-1 mod=mail cmd=msg module= rule= action=continue attachments=4 rcpts=1 routes=allow_relay,default_inbound,internalnet size=143489 guid=jb9XbZ5Gez432DgKTDz22jNgntXrF6xb hdr_mid=105f7c9d-76a2-a595-e329-617f87ba2602@company.com qid=49L2pZMi015103 hops-ip=Y.Y.Y.Y subject="Your Weekly  Insights" duration=0.095 elapsed=0.353

Oct 21 08:19:41 host.company.com 2024-10-21T13:19:41.714759+00:00 usdfwppserai1 filter_instance1[31332]: rprt s=42cu1tbqet m=1 x=42cu1tbqet-1 cmd=send profile=mail qid=49L2pZMi015103 rcpts=recipient@company.com

Oct 21 08:19:41 host.company.com 2024-10-21T13:19:41.675365+00:00 host sendmail[15103]: 49L2pZMi015103: from=sender@company.com, size=141675, class=0, nrcpts=1, msgid=105f7c9d-76a2-a595-e329-617f87ba2602@company.com, proto=ESMTP, daemon=MTA, tls_verify=NONE, tls_version=NONE, cipher=NONE, auth=NONE, relay=host.company.com [Z.Z.Z.Z]

I can extract the message id (105f7c9d-76a2-a595-e329-617f87ba2602@company.com) and qid (49L2pZMi015103) from the topmost event and tie it this way to the bottom one, but that is only two events out of a series of four. How would I generate a complete view of all four events? I am looking to get the sender and recipient SMTP addresses, the subject, and the message sizes from the top and bottom events.
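Outside Splunk, the linkage can be sketched in plain Python (hypothetical, simplified dicts standing in for the parsed events): merge everything sharing a qid into one transaction record, with hdr_mid/msgid available as a secondary cross-check key. This mirrors the idea of normalizing one correlation key across all four events before aggregating.

```python
# Hypothetical sketch: the four events reduced to parsed field dicts.
# Merging on qid yields a single record carrying sender, recipient,
# subject, and size together; hdr_mid/msgid could serve the same role.

def merge_by_qid(events):
    merged = {}
    for ev in events:
        merged.setdefault(ev["qid"], {}).update(ev)
    return merged

events = [
    {"qid": "49L2pZMi015103", "to": "recipient@company.com", "stat": "Sent"},
    {"qid": "49L2pZMi015103",
     "hdr_mid": "105f7c9d-76a2-a595-e329-617f87ba2602@company.com",
     "subject": "Your Weekly Insights", "size": "143489"},
    {"qid": "49L2pZMi015103", "rcpts": "recipient@company.com"},
    {"qid": "49L2pZMi015103", "from": "sender@company.com",
     "msgid": "105f7c9d-76a2-a595-e329-617f87ba2602@company.com"},
]
tx = merge_by_qid(events)["49L2pZMi015103"]
# tx now holds from, to, subject, and size in one record
```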
Any help would be greatly appreciated.
Hello @FPERVIL, it looks like it's not listening on 9997; there may have been an issue during the start-up of the EP. Did you already deploy a pipeline? Have you checked edge.log for specific errors?
Hi @gcusello, how can we select white for the min area in an area chart? Which option should we select?
So, I just figured out that the customer does not connect any DB to Splunk using DB Connect. They just use: database native log -> universal forwarder -> Splunk. Your doubt is valid (why not read directly from the database?), but now that I see they do not connect their DB to Splunk and just forward the logs, they can configure the UF so that it sends the logs to us as well as to Splunk.
Hello @rrovers, did you already try the "display" layout options? https://docs.splunk.com/Documentation/Splunk/9.3.1/DashStudio/layoutConfigOptions For font sizes: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2303/DashStudio/chartsTable Hope this helps.
I recently installed a Splunk Edge Processor and I noticed it's not listening on port 9997. I can see it as a node on the Splunk Cloud Platform, but I can't send on-prem data from my universal forwarders to it because it's not listening on port 9997. When I check the ports that it's currently listening on, here are the results:

ss -tunlp
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:44628 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:161 0.0.0.0:*
udp UNCONN 0 0 127.0.0.1:323 0.0.0.0:*
tcp LISTEN 0 2048 127.0.0.1:37139 0.0.0.0:* users:(("edge_linux_amd6",pid=28942,fd=7))
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
tcp LISTEN 0 2048 127.0.0.1:8888 0.0.0.0:* users:(("edge_linux_amd6",pid=28942,fd=8))
tcp LISTEN 0 128 0.0.0.0:8089 0.0.0.0:* users:(("splunkd",pid=983,fd=4))
tcp LISTEN 0 100 127.0.0.1:25 0.0.0.0:*
tcp LISTEN 0 128 127.0.0.1:44001 0.0.0.0:*
tcp LISTEN 0 2048 127.0.0.1:43335 0.0.0.0:* users:(("edge_linux_amd6",pid=28942,fd=3))
tcp LISTEN 0 128 127.0.0.1:199 0.0.0.0:*
tcp LISTEN 0 2048 127.0.0.1:1777 0.0.0.0:* users:(("edge_linux_amd6",pid=28942,fd=11))
tcp LISTEN 0 2048 192.168.66.120:10001 0.0.0.0:*
tcp LISTEN 0 2048 127.0.0.1:10001 0.0.0.0:*

As you can see, 9997 is not in there. I confirmed the shared settings for this node to make sure that it's expected to receive data on that port:

Splunk forwarders: the Edge Processor settings for receiving data from universal or heavy forwarders. Port: 9997. Maximum channels (the number of channels that all Edge Processors can use to receive data from Splunk forwarders): 300.

Any clues as to why this is happening?
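As a quick cross-check from a forwarder host, a generic TCP probe (nothing Edge Processor-specific, and the address below is just the example IP from the `ss` output) can confirm whether anything accepts connections on the receiving port:

```python
import socket

# Generic TCP connectivity probe: returns True only if something on
# host:port accepts the connection, complementing the local `ss -tunlp`.

def port_is_listening(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_is_listening("192.168.66.120", 9997) should return True
# once the Edge Processor is actually bound to 9997
```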
What is your Splunk configuration to listen for UDP 5514?
Hello @rrovers, the layout in Dashboard Studio can be set to either absolute or grid. However, there is currently no option to set a dynamic height and width for the table based on the number of rows. Thanks, Tejas. --- If the above solution helps, an upvote is appreciated..!!
Hi everyone, I'm working on a Splunk query to analyze API request metrics, and I want to avoid using a join as it is making my query slow. The main challenge is that I need to aggregate multiple metrics (min, max, avg, and percentiles) and pivot HTTP status codes (S) into columns, but the current approach with xyseries is dropping the additional values: Min, Max, Avg, P95, P98, P99. The reason for using xyseries is that it generates columns dynamically, so my result contains only the statuses that actually occur, with their counts. Here's the original working query with the join:

index=sample_index sourcetype=kube:container:sample_container
| fields U, S, D
| where isnotnull(U) and isnotnull(S) and isnotnull(D)
| rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
| stats count as TotalReq, by ApiName, S
| xyseries ApiName S, TotalReq
| addtotals labelfield=ApiName col=t label="ColumnTotals" fieldname="TotalReq"
| join type=left ApiName
    [ search index=sample_index sourcetype=kube:container:sample_container
    | fields U, S, D
    | where isnotnull(U) and isnotnull(S) and isnotnull(D)
    | rex field=U "(?P<ApiName>[^/]+)(?=\/[0-9a-fA-F\-]+$|$)"
    | stats min(D) as Min, max(D) as Max, avg(D) as Avg, perc95(D) as P95, perc98(D) as P98, perc99(D) as P99 by ApiName]
| addinfo
| eval Availability% = round(100 - ('500'*100/TotalReq),8)
| fillnull value=100 Availability%
| eval range = info_max_time - info_min_time
| eval AvgTPS=round(TotalReq/range,5)
| eval Avg=floor(Avg)
| eval P95=floor(P95)
| eval P98=floor(P98)
| eval P99=floor(P99)
| sort TotalReq
| table ApiName, 1*, 2*, 3*, 4*, 5*, Min, Max, Avg, P95, P98, P99, AvgTPS, Availability%, TotalReq

I attempted to optimize it by combining the metrics calculation into a single stats command and using eventstats or streamstats to calculate the additional statistics without dropping the required fields. I also tried passing the additional metrics through xyseries, as below, but that did not help:

| stats count as TotalReq, min(D) as Min, max(D) as Max, avg(D) as Avg, perc95(D) as P95, perc98(D) as P98, perc99(D) as P99 by ApiName, S
| xyseries ApiName S, TotalReq, Min, Max, Avg, P95, P98, P99

PS: I tried ChatGPT and it did not help, so I'm seeking help from real experts.
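To make the single-pass idea concrete, here is a hedged sketch in plain Python (not Splunk, and the field names and sample rows are hypothetical): one loop over (ApiName, status, duration) rows produces both the dynamic per-status counts and the per-API duration stats, which is roughly what computing the aggregates alongside the pivot in one pass achieves without a join.

```python
from collections import defaultdict

# One pass over (ApiName, Status, Duration) rows yields both the
# per-status pivot and the per-API duration stats, with status columns
# appearing only when that status actually occurred.

def summarize(rows):
    durations = defaultdict(list)     # ApiName -> [D, ...]
    status_counts = defaultdict(int)  # (ApiName, S) -> count
    for api, status, d in rows:
        durations[api].append(d)
        status_counts[(api, status)] += 1
    out = {}
    for api, ds in durations.items():
        row = {"Min": min(ds), "Max": max(ds),
               "Avg": sum(ds) / len(ds), "TotalReq": len(ds)}
        for (a, s), c in status_counts.items():
            if a == api:
                row[s] = c  # dynamic status column, like xyseries
        out[api] = row
    return out

rows = [("getUser", "200", 12), ("getUser", "200", 30),
        ("getUser", "500", 45), ("putItem", "200", 8)]
summary = summarize(rows)
```

Percentiles are omitted for brevity, but they would be computed from the same per-API duration lists in the same pass.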
I have a playbook set up to run on all events in a 10minute_timer label using the Timer app. These events do not contain artifacts. I've noticed the playbook runs fine when testing on a test_event that contains an artifact. When I moved it over to run on the timer label, it dies when it gets to my filter block. I've also run the exact same playbook on an event in my test_label which also didn't contain an artifact, and that too fails. I've tested it without the filter block and used a decision block instead, and that works fine. Both blocks share the same Scope in the Advanced settings drop-down. My conditions in the filter block are fine and should evaluate to True; I added a test condition on the label name to make sure of this, and even that is not triggering. I think this may be a bug. I'm open to being wrong, but I'm not sure what else I can do to test it. Thanks. I believe this is a bug with SOAR.
We use Splunk for creating reports. When I insert a table in Dashboard Studio I have to define a width and height for it. But the height should be different for each period we run the dashboard, because the number of rows can differ per period. How can I do this without changing the layout every month?