Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi Splunk Community, I've generated self-signed SSL certificates and configured them in web.conf, but they don't seem to be taking effect. Additionally, I am receiving the following warning message when starting Splunk:

    WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

Could someone please help me resolve this issue? I want to ensure that Splunk uses the correct SSL certificates and that hostname validation works properly.
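For reference, a minimal sketch of the settings involved, assuming the certificates live under a hypothetical $SPLUNK_HOME/etc/auth/mycerts (Splunk Web needs a restart to pick up web.conf changes):

    # web.conf -- the certificate Splunk Web serves
    [settings]
    enableSplunkWebSSL = true
    serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCert.pem
    privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key

    # server.conf -- addresses the startup warning by turning hostname validation on;
    # the certificate's CN/SAN must then match the name used to reach the server
    [sslConfig]
    cliVerifyServerName = true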
Splunk Enterprise Version: 9.2.0.1. OpenShift Version: 4.14.30. We used to have OpenShift event logs coming in under sourcetype openshift_events in index=openshift_generic_logs. However, starting Sept 29, we suddenly stopped receiving any logs for that index and sourcetype. The Splunk forwarders are still running and we did not make any changes to the configuration. Here is the addon.conf that we have:

    004-addon.conf

    [general]
    # addons can be run in parallel with agents
    addon = true

    [input.kubernetes_events]
    # disable collecting kubernetes events
    disabled = false
    # override type
    type = openshift_events
    # specify Splunk index
    index =
    # (obsolete, depends on kubernetes timeout)
    # Set the timeout for how long request to watch events going to hang reading.
    # eventsWatchTimeout = 30m
    # (obsolete, depends on kubernetes timeout)
    # Ignore events last seen later that this duration.
    # eventsTTL = 12h
    # set output (splunk or devnull, default is [general]defaultOutput)
    output =
    # exclude managed fields from the metadata
    excludeManagedFields = true

    [input.kubernetes_watch::pods]
    # disable events
    disabled = false
    # Set the timeout for how often watch request should refresh the whole list
    refresh = 10m
    apiVersion = v1
    kind = pod
    namespace =
    # override type
    type = openshift_objects
    # specify Splunk index
    index =
    # set output (splunk or devnull, default is [general]defaultOutput)
    output =
    # exclude managed fields from the metadata
    excludeManagedFields = true

Apologies if I'm missing something obvious here. Thank you!
Hi guys, does anyone know whether, even with the Trial version of Splunk Observability Cloud, it still accepts logs sent to it directly by the Splunk OTel Collector? According to this page: https://docs.splunk.com/observability/en/gdi/opentelemetry/components/splunk-hec-exporter.html, it says: "Caution - Splunk Log Observer is no longer available for new users. You can continue to use Log Observer if you already have an entitlement." As I'm using the Trial version, I'm just curious to see how Observability Cloud processes logs via fluentd, rather than using Log Observer Connect, which uses the Universal Forwarder to send logs to Splunk Cloud/Enterprise first, with Observability Cloud then viewing log events via the integration. It seems that Observability Cloud is not showing the ordinary syslog or Windows events that the Splunk OTel Collector sends to it automatically out of the box. I tried setting up my own log file, but nothing shows up in O11y either.
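For comparison purposes, this is roughly how a custom log file gets wired into the collector's logs pipeline; a sketch against the default Splunk OTel Collector agent config, where the path and the pre-existing splunk_hec exporter name are assumptions to adjust for your install (whether the trial backend still ingests what arrives is exactly the open question above):

    # agent_config.yaml (fragment)
    receivers:
      filelog:
        include: [/var/log/myapp/*.log]   # hypothetical path

    service:
      pipelines:
        logs:
          receivers: [filelog]
          exporters: [splunk_hec]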
I have two of the exact same searches; one works within the Search app but not in this custom internal app that packages the saved search. The search works in both apps until the where command is introduced:

    | eval delta_time = delete_time - create_time, hours=round(delta_time/3600,2)\
    | where delta_time < (48 * 3600)\

This returns results in the Search app but not in the app that houses this alert. The app is shared globally, as are all the objects within it. I also have the admin role with no restricted indexes or data.
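One hedged thing to check, since those trailing backslashes look copied from savedsearches.conf: in .conf files a trailing backslash is a line continuation, so a \ on the final line of the search can silently splice the next line of the stanza into the query. A sketch of how the stanza would normally end (stanza and index names hypothetical):

    [my_delta_alert]
    search = index=myindex sourcetype=mydata \
    | eval delta_time = delete_time - create_time, hours=round(delta_time/3600,2) \
    | where delta_time < (48 * 3600)

Note there is no backslash after the last line; if one is present, whatever setting follows (for example dispatch.earliest_time) becomes part of the search string in that app only.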
index=name tag=name NOT "health-*" words="Authentication words" OR MESSAGE_TEXT="Authentication word" | stats count by host | table host, count
I am working to integrate Splunk with AWS to ingest CloudTrail logs. Looking at the documentation for the Splunk Add-on for AWS, steps 3, 4, and 8 say to create an IAM user and an access key, and then to input the key ID and secret into the Splunk Add-on: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Admin/AWSGDI#Step_3:_Create_a_Splunk_Access_user Can we instead leverage a cross-account IAM role with an external ID for this purpose? We try to limit IAM user creation in our environment, and IAM users also create additional management overhead, such as the need to regularly rotate their access key credentials. A cross-account IAM role that Splunk Cloud can assume would be a much simpler (and more secure) implementation. Thanks!
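For reference, the trust policy such a role would carry; a sketch with a hypothetical account ID and external ID (whether the add-on's inputs can be pointed at an assumed role is the open question here):

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
        "Action": "sts:AssumeRole",
        "Condition": { "StringEquals": { "sts:ExternalId": "example-external-id" } }
      }]
    }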
Hi I have a drop down based on domain ,entity so when i select domain , entity and date selected it fetch result of initlambda,init duplicate,init error...I want to have a extra submit button ,once i hit submit then only run the result for initlambda,init duplicate,init error otherwise dont fetch anything   <row> <panel> <title> VIEW BY ENTITY</title> <input type="dropdown" token="tokEnvironment" searchWhenChanged="true"> <label>Domain</label> <choice value="Costing">Costing</choice> <change> <set token="inputToken">""</set> <set token="outputToken">""</set> <set token="inputToken2">""</set> <set token="outputToken2">""</set> <unset token="tokSystem"></unset> <unset token="form.tokSystem"></unset> </change> <default>Cost</default> <initialValue>Cost</initialValue> </input> <input type="dropdown" token="tokSystem" searchWhenChanged="false"> <label>Data Entity</label> <fieldForLabel>$tokEnvironment$</fieldForLabel> <fieldForValue>$tokEnvironment$</fieldForValue> <search> <!--<progress>--> <!-- match attribute for condition uses eval-like expression (see Splunk search language 'eval' command) --> <!-- logic: if resultCount is 0, then show a static html element, and hide the chart element --> <!-- <condition match="'job.resultCount'== 0">--> <!-- <set token="show_html">true</set>--> <!-- </condition>--> <!-- <condition>--> <!-- <unset token="show_html"/>--> <!-- </condition>--> <!-- </progress>--> <query>| makeresults | fields - _time | eval Costing="GetQuoteByCBD,bolHeader,bolLineItems,laborProcess,costSheetCalc,FOB" | fields $tokEnvironment$ | makemv $tokEnvironment$ delim="," | mvexpand $tokEnvironment$</query> </search> <change> <condition match="$label$==&quot;get&quot;"> <set token="inputToken">get</set> <set token="outputToken">get</set> <set token="inputToken2">b</set> <set token="outputToken2">b</set> <set token="inputToken3">c</set> <set token="outputToken3">c</set> <set token="inputToken4">d</set> <set token="outputToken4">d</set> <set token="inputToken5">e</set> <set token="outputToken5">e</set> <set token="inputToken4">d</set> <set token="outputToken4">d</set> <set token="inputToken3">3</set> <set token="outputToken3">3</set> <set token="apiToken">d</set> <set token="entityToken">get</set> </condition> <condition match="$label$==&quot;batch&quot;"> <set token="inputToken">batch</set> <set token="outputToken">batch</set> <set token="inputToken2">c</set> <set token="outputToken2">c</set> <set token="inputToken">b</set> <set token="outputToken4">b</set> <set token="inputToken3">d</set> <set token="outputToken3">d</set> <set token="apiToken">b</set> <set token="inputToken5">f</set> <set token="outputToken5">f</set> <set token="entityToken">batch</set> </condition> </condition> <condition match="$label$==&quot;Calc&quot;"> <set token="inputToken">Calc</set> <set token="outputToken">Calc</set> <set token="inputToken2">init</set> <set token="outputToken2">init</set> <set token="inputToken">Calc</set> <set token="outputToken4">Calc</set> <set token="inputToken3">d</set> <set token="outputToken3">d</set> <set token="apiToken">Calc</set> <set token="entityToken">Calc</set> </condition> </change> <default>get</default> </input> <input type="time" token="time_picker" searchWhenChanged="true"> <label>Time</label> <default> <earliest>-15m</earliest> <latest>now</latest> </default> </input> <html> <u1> </u1> </html> </panel> </row> <row> <panel> <title>Init Lambda</title> <table> <search> <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search 
msg="gemini:streaming:info:*" | stats count by msg</query> <earliest>$time_picker.earliest$</earliest> <latest>$time_picker.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="dataOverlayMode">heatmap</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Init Lambda - Duplicate</title> <table> <search> <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search msg="gemini:streaming:warning:*" | stats count by msg</query> <earliest>$time_picker.earliest$</earliest> <latest>$time_picker.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="dataOverlayMode">heatmap</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Init Lambda - Error</title> <table> <search> <query>index=""source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search msg="gemini:streaming:error:*" | stats count by msg</query> <earliest>$time_picker.earliest$</earliest> <latest>$time_picker.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="dataOverlayMode">heatmap</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row>
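In Simple XML, the built-in way to gate panels on a button is a form-level fieldset with submitButton="true". A minimal sketch, assuming the three inputs move out of the panel into the fieldset and all use searchWhenChanged="false", so their tokens are only applied (and the panel searches only run) when Submit is pressed:

    <form>
      <fieldset submitButton="true" autoRun="false">
        <!-- move the three inputs here, each with searchWhenChanged="false" -->
        <input type="dropdown" token="tokEnvironment" searchWhenChanged="false"> ... </input>
        <input type="dropdown" token="tokSystem" searchWhenChanged="false"> ... </input>
        <input type="time" token="time_picker" searchWhenChanged="false"> ... </input>
      </fieldset>
      <!-- rows/panels unchanged; they fire only after Submit applies the tokens -->
    </form>

With autoRun="false" nothing runs on page load either; note the Domain input's searchWhenChanged would also need to change from "true" to "false", or it will keep triggering on every selection.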
Hi, I've an eventhub that receives data from multiple applications, with different numbers and values of columns. The events typically look like this (as an example; the #appN at the end of each line is not part of the log, it just annotates the different entries):

    Environment ProductName UtcDate RequestId Clientid ClientIp                           #app1
    Environment ProductName UtcDate Instance Region RequestId ClientIp DeviceId           #app2
    Environment ProductName UtcDate DeviceId ClientIp                                     #app3
    PROD Product1 2024-04-04T20:21:20 abcd-12345-dev bcde-ed-1234 10.12.13.14             #app1
    PROD Product2 2024-04-04T20:23:20 gwa us 126d-a23d-1234-def1 10.23.45.67 abcAJHSSz12. #app2
    TEST Product3 2024-04-04T20:25:20 Ghsdhg1245 12.34.57.78                              #app3
    Environment ProductName UtcDate Instance Region RequestId ClientIp DeviceId           #app2

How can Splunk automagically select which "format" to use with REPORT/EXTRACT in transforms?

On the heavy forwarder, transforms.conf:

    [header1]
    DELIMS="\t"
    FIELDS=Environment,ProductName,UtcDate,RequestId,Clientid,ClientIp

    [header2]
    DELIMS="\t"
    FIELDS=Environment,ProductName,UtcDate,Instance,Region,RequestId,ClientIp,DeviceId

    [header3]
    DELIMS="\t"
    FIELDS=Environment,ProductName,UtcDate,DeviceId,ClientIp

In props.conf:

    [eventhub:sourcewithmixedsources]
    INDEXED_EXTRACTIONS = TSV
    CHECK_FOR_HEADER = true
    NO_BINARY_CHECK = 1
    SHOULD_LINEMERGE = false
    pulldown_type = 1
    REPORT-headers = header1, header2, header3
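One hedged approach: key the search-time REPORT transforms on REGEX instead of DELIMS, since a REGEX transform only extracts from events it actually matches, which lets each line select its own schema by column count. A sketch for the 8-column and 5-column formats (the 6-column one is analogous); this also assumes dropping INDEXED_EXTRACTIONS from the props stanza, as indexed TSV extraction and search-time REPORT would otherwise overlap:

    # transforms.conf (sketch; anchored regexes are assumptions keyed to column count)
    [header2]
    REGEX = ^(?<Environment>[^\t]+)\t(?<ProductName>[^\t]+)\t(?<UtcDate>[^\t]+)\t(?<Instance>[^\t]+)\t(?<Region>[^\t]+)\t(?<RequestId>[^\t]+)\t(?<ClientIp>[^\t]+)\t(?<DeviceId>[^\t]+)$

    [header3]
    REGEX = ^(?<Environment>[^\t]+)\t(?<ProductName>[^\t]+)\t(?<UtcDate>[^\t]+)\t(?<DeviceId>[^\t]+)\t(?<ClientIp>[^\t]+)$

    # props.conf
    [eventhub:sourcewithmixedsources]
    SHOULD_LINEMERGE = false
    REPORT-headers = header1, header2, header3

Caveat: the 5- and 8-column regexes are disjoint because the $-anchoring fixes the field count, but any two formats with the same number of columns would need a more specific discriminator.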
Is there an integration available to push to and pull from Palo Alto XSOAR? I'm looking for an integration to pull incidents and update their status.
Does Splunk support CrowdStrike OAuth API?
I have an enterprise deployment with multiple servers. All licensing is handled by a license manager. One of my indexers gives the warning "Your license is expired. Please login as an administrator to update the license." When I do, licensing looks fine. It's pointed at the correct address for the license manager. The last successful contact was less than a minute ago. Under messages it says "No licensing alerts", and under "Show all configuration details" it lists recent successful contacts and the license keys in use. That's about as far as I can go, because 30 seconds in my session gets kicked back to the login prompt with a message that it has expired. So I have one server out of a larger deployment that seems to think it doesn't have a license, even though all indications are that it does. But it still behaves like it doesn't.
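A few hedged ways to see what that indexer itself believes about its license state, which work even when the UI session keeps expiring (substitute real credentials):

    # On the affected indexer: what the local license peer thinks of its manager connection
    $SPLUNK_HOME/bin/splunk list licenser-localslave -auth admin:yourpassword

    # Any queued licensing messages on that node
    $SPLUNK_HOME/bin/splunk list licenser-messages -auth admin:yourpassword

    # The same data over REST
    curl -k -u admin https://localhost:8089/services/licenser/localslave?output_mode=json

If that output shows recent successful manager contact but the node still complains, checking $SPLUNK_HOME/var/log/splunk/splunkd.log for license-related warnings around startup is the usual next step.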
I'm creating a Splunk multisite cluster. The configuration was done as the documentation shows, including the cluster manager node. All peers show up, report that they are up, and are happily replicating. But for whatever reason the search factor and replication factor are not met. The notification about the unhealthy system tells me it's the cluster manager node (screenshot omitted). But why is that? How can I check what is wrong with it? If I look up the cluster status via the CLI, it all seems fine.
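A couple of hedged ways to see why the factors aren't met, assuming a standard manager node setup:

    # On the cluster manager -- per-index SF/RF status plus pending fixup reasons
    $SPLUNK_HOME/bin/splunk show cluster-status --verbose

    # Outstanding fixup tasks over REST, e.g. those blocking the replication factor
    curl -k -u admin https://<manager>:8089/services/cluster/master/fixup?level=replication_factor

The fixup list is usually the informative one: each entry names a bucket and the reason it cannot currently meet the factor (peers down, excess copies pending, a searchable copy still being built, and so on).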
I'm trying to configure indexes.conf in such a way that data retention is exactly 180 days, after which the data does NOT get frozen but gets deleted. I've tried to set this with frozenTimePeriodInSecs = 15552000, but now I get the following error:

    Validation errors are present in the bundle. Errors=peer=XXX, stanza=someidx Required parameter=thawedPath not configured;

So I HAVE TO put a thawed path in it even though I don't want to freeze anything? How does that make sense? Kind regards, and thanks for any clarification!
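For what it's worth, "frozen" in Splunk just means "rolled out of the index": with no coldToFrozenDir or coldToFrozenScript configured, frozen buckets are simply deleted, which is the behavior wanted here. thawedPath is required boilerplate for every index stanza even if nothing is ever thawed into it. A sketch under those assumptions (paths hypothetical), with the caveat that expiry is per bucket, so a bucket ages out only once its newest event passes the period:

    [someidx]
    homePath   = $SPLUNK_DB/someidx/db
    coldPath   = $SPLUNK_DB/someidx/colddb
    thawedPath = $SPLUNK_DB/someidx/thaweddb   # required, but only used for manual thawing
    # 180 days; buckets are deleted at expiry because no coldToFrozen* setting is present
    frozenTimePeriodInSecs = 15552000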
Hello everyone, the Splunk query below works fine in a normal Splunk search and returns the expected results:

    index="my_index" | stats count by kubernetes_cluster | table kubernetes_cluster | sort kubernetes_cluster

However, when I put the same query in a dashboard dropdown, it does not return the data. "Search on Change" is unchecked. The dropdown looks like this (source view):

    <input type="dropdown" token="regions" searchWhenChanged="false">
      <label>region</label>
      <fieldForLabel>regions</fieldForLabel>
      <fieldForValue>regions</fieldForValue>
      <search>
        <query>index="my_index" | stats count by kubernetes_cluster | table kubernetes_cluster | sort kubernetes_cluster</query>
        <earliest>0</earliest>
        <latest></latest>
      </search>
    </input>
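One detail that stands out: fieldForLabel and fieldForValue must name a field the populating search actually returns, and this query returns kubernetes_cluster, not regions. A sketch of the input with those aligned (and with a non-empty latest, since an empty <latest></latest> can also leave the search without a time range):

    <input type="dropdown" token="regions" searchWhenChanged="false">
      <label>region</label>
      <fieldForLabel>kubernetes_cluster</fieldForLabel>
      <fieldForValue>kubernetes_cluster</fieldForValue>
      <search>
        <query>index="my_index" | stats count by kubernetes_cluster | sort kubernetes_cluster</query>
        <earliest>0</earliest>
        <latest>now</latest>
      </search>
    </input>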
Hi, I am trying to create a transaction, but my starting and ending events are not always producing the correct overview. I expect the yellow-marked group of events as the result:

    index=app sourcetype=prd_wcs host=EULMFCP1WVND121 "EquipmentStatusRequest\"=" D0022
    | eval _raw = replace(_raw, "\\\\", "")
    | eval _raw = replace(_raw, "\"", "")
    | rex "Chute:DTT_S01.DA01.(?<Door>[^\,]+)"
    | rex "EquipmentName:DTT_S01.DA01.(?<EquipmentName>[^\,]+)"
    | rex "EquipmentType:(?<EquipmentType>[^\,]+)"
    | rex "Status:(?<EquipmentStatus>[^\,]+)"
    | rex "TypeOfMessage:(?<TypeOfMessage>[^\}]+)"
    | eval Code = EquipmentStatus+"-"+TypeOfMessage+"-"+EquipmentType
    | lookup Cortez_SS_Reasons.csv CODE as Code output STATE as ReasonCode
    | where ReasonCode = "Ready" OR ReasonCode = "Full"
    | transaction EquipmentName startswith=(ReasonCode="Full") endswith=(ReasonCode="Ready")
    | eval latestTS = _time + duration
    | eval counter=1
    | accum counter as Row
    | table _time latestTS Row ReasonCode
    | eval latestTS=strftime(latestTS,"%Y-%m-%d %H:%M:%S.%3N")

The search above produces the overview shown (table omitted here), and the marked line is not correct. I don't know how this happened, because I expect the transaction command to always take a group starting with "Full" and ending with "Ready". Thanks in advance.
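A hedged note on why groups can go wrong: transaction consumes events in the reverse-time order the search returns them, and when two "Full" events occur without an intervening "Ready" (or vice versa), groups can merge or span more than one cycle. An alternative sketch that sessionizes explicitly with streamstats, starting a new group at every "Full" per equipment:

    ... | where ReasonCode="Ready" OR ReasonCode="Full"
    | sort 0 EquipmentName _time
    | streamstats count(eval(ReasonCode="Full")) as session by EquipmentName
    | stats min(_time) as start max(_time) as latestTS values(ReasonCode) as codes by EquipmentName session
    | eval duration = latestTS - start

This makes the grouping rule explicit rather than relying on transaction's pairing, at the cost of losing transaction's raw-event concatenation.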
I have JSON data like this:

    "suite":[{"hostname":"localhost","failures":0,"package":"ABC","tests":0,"name":"ABC_test","id":0,"time":0,"errors":0,"testcase":[{"classname":"xyz","name":"foo1","time":0,"status":"Passed"},{"classname":"pqr","name":"foo2)","time":0,"status":"Passed"},....

I want to create a table with Suite, testcase_name, and Testcase_status as columns. I have a solution using the mvexpand command, but when there is a lot of data the output gets truncated by mvexpand:

    ....| spath output=suite path=suite{}.name
    | spath output=Testcase path=suite{}.testcase{}.name
    | spath output=Error path=suite{}.testcase{}.error
    | spath output=Status path=suite{}.testcase{}.status
    | search (suite="*")
    | eval x=mvzip(Testcase,Status)
    | mvexpand x
    | eval y=split(x,",")
    | eval Testcase=mvindex(y,0)
    | search Testcase IN ("***")
    | eval suite=mvdedup(suite)
    | eval Status=mvindex(y,1)
    | table "Suite" "TestCase" Status

This is the query I'm using, but the results get truncated. Is there an alternative to mvexpand so that I can adapt the query above?
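Two hedged options. mvexpand truncates when it hits its memory cap (max_mem_usage_mb in limits.conf), so raising that is one route. The other is to let stats do the row expansion: a multivalue field in a by clause emits one row per value and is not subject to mvexpand's cap. A sketch, with the caveat that identical testcase/status pairs from different events will collapse into one row:

    ... | spath output=suite path=suite{}.name
    | spath output=Testcase path=suite{}.testcase{}.name
    | spath output=Status path=suite{}.testcase{}.status
    | eval x=mvzip(Testcase, Status, "|")
    | stats values(suite) as Suite by x
    | eval TestCase=mvindex(split(x,"|"),0), Status=mvindex(split(x,"|"),1)
    | table Suite TestCase Status

Using "|" as the mvzip delimiter also avoids mis-splitting any testcase names that contain commas, which the original split(x,",") would hit.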
Hi Team, can you please let me know how I can use the field extraction formula below directly with the rex command?

Field extraction formula:

    ^(?:[^,\n]*,){7}\s+"\w+_\w+_\w+_\w+_\w+":\s+"(?P<POH>[^"]+)
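In rex the pattern sits inside a double-quoted SPL string, so the embedded quotes need escaping as \" while everything else (including the (?P<...> named-group syntax) carries over as-is:

    | rex field=_raw "^(?:[^,\n]*,){7}\s+\"\w+_\w+_\w+_\w+_\w+\":\s+\"(?P<POH>[^\"]+)"

field=_raw is the default and shown only for clarity.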
Hi, I have a dashboard and I want to add a button, so that when somebody solves a particular issue they can click the button, the status changes to solved, and the issue is removed from the dashboard. For example: I have an issue on a device; once I've solved it, I can click the button to mark the issue as solved or remove it from the dashboard.
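There's no built-in per-row button in Simple XML, but one common hedged pattern (all names hypothetical) is to record "solved" clicks in a lookup via a drilldown token, then exclude those issues from the panel's search:

    <!-- on the issues table: clicking a row captures its id -->
    <drilldown>
      <set token="solve_id">$row.issue_id$</set>
    </drilldown>

    <!-- hidden panel that appears only once a row is clicked and appends to the lookup -->
    <panel depends="$solve_id$">
      <table>
        <search>
          <query>| makeresults | eval issue_id="$solve_id$", status="solved" | outputlookup append=true solved_issues.csv</query>
        </search>
      </table>
    </panel>

The issues panel then filters with ... NOT [| inputlookup solved_issues.csv | fields issue_id] so solved items drop off on the next refresh. A KV store lookup works the same way and is the better fit if statuses need editing later.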
I have finally succeeded in connecting Splunk with Power BI. But while adding a new source and getting data in Power BI, the data models I see are different from those in the Splunk Datasets tab, and I also cannot find the table view I created in Splunk.
Hi everyone, My name is Emmanuel Katto. I’m currently working on a project where I need to analyze large datasets in Splunk, and I've noticed that the search performance tends to degrade as the dataset size increases. I'm looking for best practices or tips on how to optimize search performance in Splunk.   What are the recommended indexing strategies for managing large volumes of data efficiently? Are there particular search query optimizations I should consider to speed up the execution time, especially with complex queries? How can I effectively utilize data models to improve performance in my searches? I appreciate any insights or experiences you can share. Thank you in advance for your help! Best, Emmanuel Katto
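On the data model question specifically, the usual pattern is to accelerate a data model and query it with tstats, which reads pre-summarized data instead of scanning raw events. A sketch, with index names hypothetical and the last line assuming the CIM Web data model is installed and accelerated:

    index=web sourcetype=access_combined | stats count by host

    | tstats count where index=web by host

    | tstats summariesonly=true count from datamodel=Web by Web.status

The first two return the same counts, but tstats reads only indexed fields (host, source, sourcetype, _time) and never touches raw events; the third runs against acceleration summaries, typically the biggest win for repeated dashboard searches. More generally: filter on index, sourcetype, and time as early in the search as possible, and prefer the stats family over transaction or join for complex queries.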