
All Posts

Hi @siva_kumar0147, No, I only use makeresults to generate sample data. The logic from the sort command down drives the visualization.
During an upgrade of our production Splunk Enterprise 9.2.4 to 9.3.0, the installer throws an error: SSLEAY32.dll not found (+libeay32.dll). N.B. Splunk is installed on drive "D:\Program Files\Splunk". Rebooted our Windows 2019 server and tried again, but with the same result. Oddly enough, I did find an SSLEAY32 and a LIBEAY32 file in the folder "D:\Program Files\splunk\bin"!? I have no idea what to do now and am very reluctant to experiment further, although I have found similar problems on the internet, just not specifically related to Splunk. Does anyone have a tip or a suggestion for what to do next? For example: can I skip 9.3.0 and continue with 9.3.1 or 9.3.2? Thanks for all the responses AshleyP
Hi @poojak2579, the Submit button was created to run the dashboard search, not to accept a value to store in a lookup. If you want to accept a value and store it in a lookup, you have to use an HTML button and an external JS file containing the outputlookup search. This is a sample of this I shared some time ago: https://community.splunk.com/t5/Dashboards-Visualizations/Dynamically-Update-a-lookup-file-on-click-of-a-field-and-showing/m-p/674605 Ciao. Giuseppe
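For orientation, here is a minimal sketch of that pattern (not the exact code from the linked post; the button id, token name, and lookup file are hypothetical placeholders):

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc/simplexml/ready!'
], function ($, mvc, SearchManager) {
    // Hypothetical names: an HTML button with id="submit_btn" in the dashboard
    // XML, a text input bound to token "usecasename", and a lookup my_lookup.csv.
    var tokens = mvc.Components.get('default');
    $('#submit_btn').on('click', function () {
        new SearchManager({
            id: 'save_lookup_' + Date.now(),   // unique id per click
            search: '| makeresults ' +
                    '| eval useCaseName="' + tokens.get('usecasename') + '" ' +
                    '| fields useCaseName ' +
                    '| outputlookup append=true my_lookup.csv',
            autostart: true
        });
    });
});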
Now this is a well-defined problem. As you suspected, you will have to manipulate data one way or another if you want this chart format. So here is one option:

index="audit" sourcetype="signin" userPrincipalName="*domain.com" status.errorCode=0
| rename "deviceDetail.isCompliant" as DeviceCompliance
| stats count latest(createdDateTime) as lastLogin by userPrincipalName DeviceCompliance
| eventstats max(lastLogin) as lastLogin by userPrincipalName
| tojson userPrincipalName lastLogin
| chart sum(count) as count over _raw by DeviceCompliance
| fillnull true false
| eval total=true + false
| rename true as compliant
| eval percent=((compliant/total)*100)
| spath
| table userPrincipalName compliant total percent lastLogin

Here is an emulation to test this:

index=_audit action IN (artifact_deleted, quota)
| eval action = if(action == "quota", "true", "false")
| rename user AS userPrincipalName, action AS DeviceCompliance, _time as createdDateTime
| eval createdDateTime = strftime(createdDateTime, "%FT%H:%M:%S")
``` the above emulates
index="audit" sourcetype="signin" userPrincipalName="*domain.com" status.errorCode=0
| rename "deviceDetail.isCompliant" as DeviceCompliance ```

Combining these, I get

userPrincipalName      compliant   total   percent              lastLogin
yliu                   46          46      100                  2024-12-05T22:04:13
splunk-system-user     134         392     34.183673469387756   2024-12-06T03:06:08
I'm in the exact same boat; the chatbot said it can't recognize my email as a business email. The issue is that I'm a student studying cybersecurity and don't have a business email, only my personal one. So what are students supposed to do when we need to learn how to use Splunk to be able to pass our courses?
Hi everyone, I'm currently working on extracting the webaclId field from AWS WAF logs and setting it as the host metadata in Splunk. However, I've been running into issues where the regex doesn't seem to work, and Splunk throws the error:

Log Example: Below is an obfuscated example of an event from the logs I'm working with:

{
  "timestamp": 1733490000011,
  "formatVersion": 1,
  "webaclId": "arn:aws:wafv2:region:account-id:regional/webacl/webacl-name/resource-id",
  "action": "ALLOW",
  "httpRequest": {
    "clientIp": "192.0.2.1",
    "country": "XX",
    "headers": [ { "name": "Host", "value": "example.com" } ],
    "uri": "/v2.01/endpoint/path/resource",
    "httpMethod": "GET"
  }
}

I want to extract the webacl-name from the webaclId field and set it as the host metadata in Splunk. For the above example, the desired host value should be: webacl-name

Here's my current Splunk configuration:

inputs.conf:

[monitor:///opt/splunk/etc/tes*.txt]
disabled = false
index = test
sourcetype = aws:waf

props.conf:

[sourcetype::aws:waf]
TRANSFORMS-set_host = extract_webacl_name

transforms.conf:

[extract_webacl_name]
REGEX = \"webaclId\":\"[^:]+:[^:]+:[^:]+:[^:]+:[^:]+:regional\/webacl\/([^\/]+)\/
FORMAT = host::$1
DEST_KEY = MetaData:Host
SOURCE_KEY = _raw

What I've Tried:
I've validated the regex on external tools like regex101, and it works for the log structure. For example, the regex successfully extracts webacl-name from:
"webaclId":"arn:aws:wafv2:region:account-id:regional/webacl/webacl-name/resource-id"

Manual rex testing in Splunk:

index=test sourcetype=aws:waf
| rex field=_raw "\"webaclId\":\"[^:]+:[^:]+:[^:]+:[^:]+:[^:]+:regional\/webacl\/(?<webacl_name>[^\/]+)\/"
| table _raw webacl_name

Questions:
- Does my transforms.conf configuration have any issues I might be missing?
- Is there an alternative or more efficient way to handle this extraction and rewrite the host field?
- Are there any known limitations or edge cases with using JSON data for MetaData:Host updates?

I'd greatly appreciate any insights or suggestions. Thank you for your help!
Thanks for the question. Those links are not hard-coded to version 9.3.2. When a higher version of the Splunk Enterprise docs is made public, the links will point to that new highest ("latest") version of the topics, with no manual intervention required by Splunk. It's possible for us (intentionally or mistakenly) to hard-code links to a specific version other than the "latest". If you ever see a link that points to an older version of documentation and you think it should point to "latest", feel free to let us know. You can use the docs channel in Slack or submit doc feedback on a related topic, since there is no feedback form for Splexicon entries.
I have created a dashboard that takes input from users in 4 textbox inputs and stores it in a lookup file. My requirement is that tokens should be passed to the search query only after the submit button is clicked by the user, but the submit button is not working as expected: sometimes the query executes automatically when we click outside the text boxes, or it executes when the page is reloaded. My second requirement is to clear the textboxes once the submit button is clicked. I searched the community for similar questions and made changes in the code as suggested, but it is not working. Thanks in advance.

<fieldset submitButton="true" autoRun="false">
  <input type="text" token="usecasename" searchWhenChanged="false">
    <label>Enter UseCaseName Here</label>
  </input>
  <input type="text" token="error" searchWhenChanged="false">
    <label>Enter Error/Exception here</label>
  </input>
  <input type="text" token="impact" searchWhenChanged="false">
    <label>Enter Impact here</label>
  </input>
  <input type="text" token="reason" searchWhenChanged="false">
    <label>Enter Reason here</label>
  </input>
</fieldset>
<row depends="$hide$">
  <panel>
    <table>
      <title></title>
      <search>
        <query>| stats count | fields - count | eval useCaseName="$usecasename$", "Error/Exception in logs"="$error$", Impact="$impact$", Reason="$reason$" | append [| inputlookup lookup_exceptions_all_usecase1.csv] | outputlookup lookup_exceptions_all_usecase1.csv</query>
        <earliest>-24h</earliest>
        <latest></latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
This is a scripted input, so it doesn't have all the mechanics associated with modular inputs - you cannot pass parameters to it by setting config items in the input's config stanza. But it works on a UF, whereas modular inputs don't. Anyway, the scripts in ta_nix are more like examples to tune and adjust to your needs than ready-for-production code.
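To make the contrast concrete, here is a minimal sketch of a scripted input stanza in inputs.conf (the script path and values are hypothetical): only the standard keys such as interval, index, and sourcetype apply, so any custom parameters have to be hard-wired into, or read from a side file by, the script itself.

[script://./bin/my_disk_stats.sh]
# Standard scripted-input keys only; arbitrary "param = value" pairs in this
# stanza are NOT passed to the script, unlike a modular input's arguments.
interval = 300
sourcetype = df
index = os
disabled = false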
And there is no going back to the trial license; it is by design. Also remember that while searching is blocked, your environment is still indexing, so on the one hand you're not losing data, but on the other you might still be generating violations...
Once you've exceeded the ingest limit five times, you are in violation of the free license and Splunk will only let you search the internal indexes. The fix is to wait for the violations to expire (about 30 days, IIRC) or buy a license.
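As a quick sanity check of how close an instance is to the limit, a search over the internal license log (which remains searchable even while in violation) can chart daily indexed volume; a sketch, assuming the standard license_usage.log fields:

index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) as bytes
| eval GB = round(bytes/1024/1024/1024, 3)
| fields - bytes
``` days where GB exceeds the 0.5 GB free-license quota are the ones that generate violations ```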
Building on previous answers, use fieldformat to display Duracion in the desired way while still keeping it as a number.  Then sum the values using addcoltotals.

index="cdr"
| search "Call.TermParty.TrunkGroup.TrunkGroupId"="2811" OR "Call.TermParty.TrunkGroup.TrunkGroupId"="2810" "Call.ConnectTime"=* "Call.DisconnectTime"=*
| lookup Pais Call.RoutingInfo.DestAddr OUTPUT Countrie
| eval Disctime=strftime('Call.DisconnectTime'/1000, "%m/%d/%Y %H:%M:%S %Q")
| eval Conntime=strftime('Call.ConnectTime'/1000, "%m/%d/%Y %H:%M:%S%Q")
| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')
| fieldformat Duracion=strftime(diffTime/1000, "%M:%S")
| table Countrie, Duracion
| addcoltotals label="Total" labelfield=Countrie Duracion
Use stats to sum your diffTime values, then use fieldformat with tostring and the "duration" argument to display the value as a string.
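For example, a sketch of that approach bolted onto the search above (diffTime there is in milliseconds, hence the division by 1000):

| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')
| stats sum(diffTime) as totalMillis by Countrie
| eval totalSeconds = totalMillis / 1000
| fieldformat totalSeconds = tostring(totalSeconds, "duration")
``` tostring(x, "duration") renders seconds as HH:MM:SS, e.g. 437 -> 00:07:17 ```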
Hello community, I want to make offboarding clients more efficient. Is there an SPL search to find ALL of the knowledge objects (KOs) created in a particular app?
Thanks. I paste the script and result:

index="cdr"
| search "Call.TermParty.TrunkGroup.TrunkGroupId"="2811" OR "Call.TermParty.TrunkGroup.TrunkGroupId"="2810" "Call.ConnectTime"=* "Call.DisconnectTime"=*
| lookup Pais Call.RoutingInfo.DestAddr OUTPUT Countrie
| eval Disctime=strftime('Call.DisconnectTime'/1000, "%m/%d/%Y %H:%M:%S %Q")
| eval Conntime=strftime('Call.ConnectTime'/1000, "%m/%d/%Y %H:%M:%S%Q")
| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')
| eval Duracion=strftime(diffTime/1000, "%M:%S")
| table Countrie, Duracion

Countrie     Duracion
Chile        01:17
Hong Kong    00:02
Denmark      02:01
Denmark      00:51
Denmark      00:51
Denmark      06:30
China        02:59
Uruguay      00:18
If sampling is the cause of this issue, you can disable it in your chart.  
rum.node.* metrics are page-level metrics. Page-level metrics are only captured if custom URL grouping rules are configured and active. Here are a couple of sanity checks:
- Check that the rule is active.
- Be sure to generate traffic after the rule is active.
- You need at least one matching domain and path rule (create the domain rule first).
Hi, my dashboard has 2 inputs: a dropdown and a time picker. I have a requirement that the panels should appear only after both inputs have been provided. I tried this (dashboard code below): when the dashboard first loads, I choose both inputs and the panel appears. After that, when I choose another item from the dropdown (keeping the same time), nothing happens; I have to pick a different time before the respective panel appears. What should I change in the code so that even if I change only the dropdown item, the panel appears for the same chosen timeframe?

Dashboard Code:

<form version="1.1" theme="light">
  <label>Time Picker Input</label>
  <description>Replicate time picker issue</description>
  <fieldset submitButton="false">
    <input type="dropdown" token="item" searchWhenChanged="true">
      <label>Select Item</label>
      <choice value="table1">TABLE-1</choice>
      <choice value="table2">TABLE-2</choice>
      <choice value="table3">TABLE-3</choice>
      <change>
        <condition value="table1">
          <set token="tab1">"Table1"</set>
          <unset token="tab2"></unset>
          <unset token="tab3"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
        <condition value="table2">
          <set token="tab2">"Table2"</set>
          <unset token="tab1"></unset>
          <unset token="tab3"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
        <condition value="table3">
          <set token="tab3">"Table3"</set>
          <unset token="tab1"></unset>
          <unset token="tab2"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
        <condition>
          <unset token="tab1"></unset>
          <unset token="tab2"></unset>
          <unset token="tab3"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
      </change>
    </input>
    <input type="time" token="time" searchWhenChanged="true">
      <label>Select Time</label>
      <change>
        <set token="is_time_selected">true</set>
      </change>
    </input>
  </fieldset>
  <row depends="$tab1$$is_time_selected$">
    <panel>
      <table>
        <title>Table1</title>
        <search>
          <query>| makeresults | eval Table = "Table1" | eval e_time = "$time.earliest$", l_time = "$time.latest$" | table Table e_time l_time</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row depends="$tab2$$is_time_selected$">
    <panel>
      <table>
        <title>Table2</title>
        <search>
          <query>| makeresults | eval Table = "Table2" | eval e_time = "$time.earliest$", l_time = "$time.latest$" | table Table e_time l_time</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row depends="$tab3$$is_time_selected$">
    <panel>
      <table>
        <title>Table3</title>
        <search>
          <query>| makeresults | eval Table = "Table3" | eval e_time = "$time.earliest$", l_time = "$time.latest$" | table Table e_time l_time</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

Thanks & Regards, Shashwat
I might start with a signal like service.request.count with a filter on sf_error=true. Then if I choose “count by sf_service” for my function and visualize as a heat map, that might be a good start. Under “chart options” you can define color thresholds so low error counts can be green, high error counts can be red, etc. If you need to work with a value that you don’t have available, such as platform or region, you may want to look at defining those as span tags and indexing them as APM metricsets. https://docs.splunk.com/observability/en/apm/span-tags/cmms.html
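For reference, the equivalent chart could be written in SignalFlow along these lines (a sketch only; the metric and dimension names service.request.count, sf_error, and sf_service come from the description above and may differ in your environment):

# Error counts per service, suitable for a heat map visualization
errors = data('service.request.count', filter=filter('sf_error', 'true'))
errors.sum(by=['sf_service']).publish(label='Errors by service')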