All Posts


I am working to integrate Splunk with AWS to ingest CloudTrail logs. Looking at the documentation for the Splunk Add-on for AWS, under steps 3, 4, and 8 it says to create an IAM user and an access key, and then to input the access key ID and secret access key into the Splunk Add-on: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Admin/AWSGDI#Step_3:_Create_a_Splunk_Access_user Can we instead leverage a cross-account IAM role with an external ID for this purpose? We try to limit IAM user creation in our environment, and IAM users also create additional management overhead, such as needing to regularly rotate the access key credentials. A cross-account IAM role that can be assumed by Splunk Cloud would be a much simpler (and more secure) implementation. Thanks!
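For reference, the trust policy we would put on such a role looks roughly like the sketch below. This is only a sketch: the account ID and external ID are placeholders, and the actual principal to trust would be whatever account Splunk Cloud assumes the role from.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "example-external-id" } }
    }
  ]
}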
Hi, I have dropdowns based on domain and entity: when I select a domain, an entity, and a date, it fetches the results for initlambda, init duplicate, and init error. I want an extra Submit button, so that the initlambda, init duplicate, and init error results only run once I hit Submit; otherwise nothing should be fetched. (A sketch of what I mean follows the dashboard XML below.)

<row>
  <panel>
    <title>VIEW BY ENTITY</title>
    <input type="dropdown" token="tokEnvironment" searchWhenChanged="true">
      <label>Domain</label>
      <choice value="Costing">Costing</choice>
      <change>
        <set token="inputToken">""</set>
        <set token="outputToken">""</set>
        <set token="inputToken2">""</set>
        <set token="outputToken2">""</set>
        <unset token="tokSystem"></unset>
        <unset token="form.tokSystem"></unset>
      </change>
      <default>Cost</default>
      <initialValue>Cost</initialValue>
    </input>
    <input type="dropdown" token="tokSystem" searchWhenChanged="false">
      <label>Data Entity</label>
      <fieldForLabel>$tokEnvironment$</fieldForLabel>
      <fieldForValue>$tokEnvironment$</fieldForValue>
      <search>
        <query>| makeresults | fields - _time | eval Costing="GetQuoteByCBD,bolHeader,bolLineItems,laborProcess,costSheetCalc,FOB" | fields $tokEnvironment$ | makemv $tokEnvironment$ delim="," | mvexpand $tokEnvironment$</query>
      </search>
      <change>
        <condition match="$label$==&quot;get&quot;">
          <set token="inputToken">get</set>
          <set token="outputToken">get</set>
          <set token="inputToken2">b</set>
          <set token="outputToken2">b</set>
          <set token="inputToken3">c</set>
          <set token="outputToken3">c</set>
          <set token="inputToken4">d</set>
          <set token="outputToken4">d</set>
          <set token="inputToken5">e</set>
          <set token="outputToken5">e</set>
          <set token="apiToken">d</set>
          <set token="entityToken">get</set>
        </condition>
        <condition match="$label$==&quot;batch&quot;">
          <set token="inputToken">batch</set>
          <set token="outputToken">batch</set>
          <set token="inputToken2">c</set>
          <set token="outputToken2">c</set>
          <set token="inputToken3">d</set>
          <set token="outputToken3">d</set>
          <set token="inputToken4">b</set>
          <set token="outputToken4">b</set>
          <set token="inputToken5">f</set>
          <set token="outputToken5">f</set>
          <set token="apiToken">b</set>
          <set token="entityToken">batch</set>
        </condition>
        <condition match="$label$==&quot;Calc&quot;">
          <set token="inputToken">Calc</set>
          <set token="outputToken">Calc</set>
          <set token="inputToken2">init</set>
          <set token="outputToken2">init</set>
          <set token="inputToken3">d</set>
          <set token="outputToken3">d</set>
          <set token="inputToken4">Calc</set>
          <set token="outputToken4">Calc</set>
          <set token="apiToken">Calc</set>
          <set token="entityToken">Calc</set>
        </condition>
      </change>
      <default>get</default>
    </input>
    <input type="time" token="time_picker" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </panel>
</row>
<row>
  <panel>
    <title>Init Lambda</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search msg="gemini:streaming:info:*" | stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <panel>
    <title>Init Lambda - Duplicate</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search msg="gemini:streaming:warning:*" | stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <panel>
    <title>Init Lambda - Error</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search msg="gemini:streaming:error:*" | stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
@richgalloway  thanks. It worked.
Hi, I've an event hub that receives data from multiple applications, with different numbers and values of columns. The events typically look like this (as an example):

Environment ProductName UtcDate RequestId Clientid ClientIp #app1
Environment ProductName UtcDate Instance Region RequestId ClientIp DeviceId #app2
Environment ProductName UtcDate DeviceId ClientIp #app3
PROD Product1 2024-04-04T20:21:20 abcd-12345-dev bcde-ed-1234 10.12.13.14 #app1
PROD Product2 2024-04-04T20:23:20 gwa us 126d-a23d-1234-def1 10.23.45.67 abcAJHSSz12. #app2
TEST Product3 2024-04-04T20:25:20 Ghsdhg1245 12.34.57.78 #app3
Environment ProductName UtcDate Instance Region RequestId ClientIp DeviceId #app2

(The #app1/#app2/#app3 at the end of each line is not part of the log, just an annotation for the different entries.)

How can Splunk automagically select which "format" to use with REPORT/EXTRACT in transforms? (One idea is sketched after my current config below.)

On the heavy forwarder, transforms.conf:

[header1]
DELIMS = "\t"
FIELDS = Environment,ProductName,UtcDate,RequestId,Clientid,ClientIp

[header2]
DELIMS = "\t"
FIELDS = Environment,ProductName,UtcDate,Instance,Region,RequestId,ClientIp,DeviceId

[header3]
DELIMS = "\t"
FIELDS = Environment,ProductName,UtcDate,DeviceId,ClientIp

In props.conf:

[eventhub:sourcewithmixedsources]
INDEXED_EXTRACTIONS = TSV
CHECK_FOR_HEADER = true
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
pulldown_type = 1
REPORT-headers = header1, header2, header3
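One idea I had is the sketch below: instead of DELIMS, use anchored REGEX-based transforms, so each stanza only extracts when its field count matches and simply doesn't fire otherwise. The stanza names and regexes are just illustrative, and since REPORT is a search-time setting it would live on the search head/indexers rather than being combined with INDEXED_EXTRACTIONS.

transforms.conf:

[header_app2]
# 8 tab-separated fields -> app2 layout
REGEX = ^([^\t]+)\t([^\t]+)\t([^\t]+)\t([^\t]+)\t([^\t]+)\t([^\t]+)\t([^\t]+)\t([^\t]+)$
FORMAT = Environment::$1 ProductName::$2 UtcDate::$3 Instance::$4 Region::$5 RequestId::$6 ClientIp::$7 DeviceId::$8

[header_app1]
# 6 tab-separated fields -> app1 layout
REGEX = ^([^\t]+)\t([^\t]+)\t([^\t]+)\t([^\t]+)\t([^\t]+)\t([^\t]+)$
FORMAT = Environment::$1 ProductName::$2 UtcDate::$3 RequestId::$4 Clientid::$5 ClientIp::$6

[header_app3]
# 5 tab-separated fields -> app3 layout
REGEX = ^([^\t]+)\t([^\t]+)\t([^\t]+)\t([^\t]+)\t([^\t]+)$
FORMAT = Environment::$1 ProductName::$2 UtcDate::$3 DeviceId::$4 ClientIp::$5

props.conf:

[eventhub:sourcewithmixedsources]
REPORT-headers = header_app2, header_app1, header_app3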
Is there an integration available to push and pull to and from Palo Alto XSOAR? Looking for an integration to pull incidents and update their status.
The indexers are clustered. I also have two search heads, a deployment server, and an HF. The license manager server is offsite and held at a higher level, organizationally speaking; I don't have direct access to it.

"Did you try restarting Splunk from the backend?" Yes. That did not resolve it.

"Look at the license files in $SPLUNK_HOME/etc/licenses" I just see the temp trial license folder. I checked another server and see the same thing. Should there be anything else there?
See bolded text:

java -Djava.library.path="<db_agent_home>\auth\x64" -Ddbagent.name="Scarborough Network Database Agent" -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -jar <db_agent_home>\db-agent.jar
Hi @Liping.Zhang, I did some digging around and found this. I hope it's able to lead you to some further investigation. In this case it is almost certainly the singularityheader injected by the agent which is at fault (Unicode: 0x73 is lowercase 's')
It is a Splunk-supported SOAR connector: https://github.com/splunk-soar-connectors/crowdstrikeoauth
Does Splunk support CrowdStrike OAuth API?
Can you please provide your high-level Splunk topology? How many components do you have? Is it clustered, etc.? Did you try restarting Splunk from the backend? Look at the license files in $SPLUNK_HOME/etc/licenses.
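If you have CLI access, something like this (a sketch, assuming a default install) shows what the node itself has loaded:

ls -l $SPLUNK_HOME/etc/licenses/
$SPLUNK_HOME/bin/splunk list licenses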
Hello, do you see any corrupt buckets? Sometimes restarting the CM has been known to clear stuck bucket fix-up tasks. Refer: https://docs.splunk.com/Documentation/Splunk/9.3.1/Indexer/Anomalousbuckets
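To check for corruption, the scan from that page can be run on the affected peers, something like this (a sketch; it can be slow on large indexes):

$SPLUNK_HOME/bin/splunk fsck scan --all-buckets-all-indexes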
Thanks for the suggestion, but I don't see a magnifying glass on any of the panels on the overview screen like on normal dashboard panels.  I'm logged in as admin.
I have the same issue. If I enforce the HttpOnly setting for all cookies, Splunk no longer works correctly.
I have an enterprise deployment with multiple servers. All licensing is handled by a license manager. One of my indexers gives the warning "Your license is expired. Please login as an administrator to update the license." When I do, licensing looks fine. It's pointed to the correct address for the license manager, the last successful contact was less than a minute ago, and under messages it says "No licensing alerts". Under "Show all configuration details" it lists recent successful contacts and the license keys in use. That's about as far as I can go, because 30 seconds in my session gets kicked back to the login prompt with a message that my session has expired. So, I have one server out of a larger deployment that seems to think it doesn't have a license, but all indications are that it does. Yet it still behaves like it doesn't.
I'm creating a Splunk multisite cluster. The configuration is done as the documentation shows, and so is the cluster manager node. All peers show up, report that they are up, and are happily replicating. But for whatever reason the search factor and replication factor are not met, and the notification about the unhealthy system tells me it's the cluster manager node. But why is that? How can I check what is wrong with it? If I look up the cluster status via the CLI, it all seems fine.
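For reference, this is the kind of check I mean (run on the manager node):

$SPLUNK_HOME/bin/splunk show cluster-status --verbose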
Thanks a lot!
Thawed path is the directory in which you'd have to manually put the data to be thawed (or where Splunk puts it after thawing; I don't remember, as I don't generally thaw buckets). It doesn't have anything to do with the freezing process. If you don't define a frozen path (and freeze script), the data will get deleted when rolled to frozen. And be aware of what @gcusello said: data is rolled on a per-bucket basis, which means that the "resolution" of the bucket rolling process depends on the contents of the buckets. Data is rolled to frozen when the _newest_ event in a bucket is older than the retention period. That can be important, especially in the case of quarantine buckets.
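As a sketch of the indexes.conf settings involved (the index name, path, and value are placeholders):

[my_index]
# a bucket rolls to frozen when its newest event is older than this many seconds
frozenTimePeriodInSecs = 7776000
# if neither coldToFrozenDir nor coldToFrozenScript is set, frozen buckets are deleted
coldToFrozenDir = /archive/splunk/my_index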
Your input data is definitely _not_ in the same order as shown in the opening post.
Hi @avoelk, you don't need to allocate any disk space: the thawed path is only a mount point that you can use to recover frozen buckets. Even if you don't need it, you must still define the mount point (the thawedPath) in indexes.conf, but you don't have to allocate any disk space for it. Ciao. Giuseppe
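P.S. A minimal sketch of what that looks like in indexes.conf (the index name and paths are placeholders):

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

To recover data you'd copy an archived bucket into the thaweddb directory, rebuild it with "splunk rebuild <path to bucket>", and restart Splunk.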