All Posts



My dashboard runs the search in each row's panel even when the submit button has not been clicked. I tried setting autoRun="false" and searchWhenChanged="false", and the submit button tag is added, so the searches should only be triggered when the dropdown and time are selected and the user presses the submit button.
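For reference, a minimal sketch of how a submit-gated form is usually declared in SimpleXML (token names here are placeholders, not from the dashboard in question): with submitButton="true" and autoRun="false" on the fieldset, and searchWhenChanged="false" on each input, panel searches should wait for Submit.

<form>
  <fieldset submitButton="true" autoRun="false">
    <input type="dropdown" token="tok_env" searchWhenChanged="false">
      <label>Environment</label>
      <choice value="prod">prod</choice>
    </input>
    <input type="time" token="tok_time" searchWhenChanged="false">
      <label>Time</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main env=$tok_env$ | stats count</query>
          <earliest>$tok_time.earliest$</earliest>
          <latest>$tok_time.latest$</latest>
        </search>
      </table>
    </panel>
  </row>
</form>

Common gotchas that make panels run before Submit: inputs placed inside a <panel> rather than the <fieldset>, inputs with searchWhenChanged="true", and <change> handlers that set tokens as soon as a selection is made.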
I have two identical searches, and one works within the Search app but not in the custom internal app that packages the saved search. The search works in both apps until the where command is introduced:

| eval delta_time = delete_time - create_time, hours=round(delta_time/3600,2)
| where delta_time < (48 * 3600)

This returns results in the Search app but not in the app that houses this alert. The app, and all the objects within it, is shared globally. I also have the admin role with no restricted indexes or data.
Typically this is how we set up the license slaves; I recommend trying the below. Use the CLI command from the bin directory on the indexer:

./splunk edit licenser-localslave -master_uri https://licensemaster:8089

Or you can update the server.conf file with:

[license]
master_uri = https://mylicensemasterserver:8089

Refer: https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/LicenserCLIcommands

I would recommend opening a support case if you are still stuck after this. Hope this helps.
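You can sanity-check the setting afterwards from the same bin directory; as far as I recall the licenser-localslave object also supports list (output fields vary by version):

./splunk list licenser-localslave

This should echo back the master_uri you configured. Note that edits to server.conf only take effect after a restart of that instance.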
1. If you're doing indexed extractions, your data is processed as it is parsed. Adding search-time extractions will only result in duplicate fields (or misassigned fields in the case of not-well-defined formats).

2. In general, unless you have a file input with a header specifying the fields within that file, there's no way to assign fields dynamically to indexed-extraction fields.

3. You could try making search-time extraction definitions that match only specific message templates, like:

EXTRACT-fields-for-app1 = ^(?<Environment>\S+)\s+(?<ProductName>\S+)\s+(?<UtcDate>\S+)\s+(?<RequestId>\S+)\s+(?<ClientId>\S+)\s+(?<ClientIp>\d+\.\d+\.\d+\.\d+)$

(EXTRACT- rather than REPORT- here, since REPORT- expects the name of a transforms.conf stanza, not an inline regex.) This should match only data for app1, because it has a specific number of whitespace-separated fields and has the IP value anchored in a particular place within the event. You can have several other similar extraction definitions, each covering a separate event template.
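A minimal props.conf sketch of how several of those per-template extractions could sit side by side on the search head (the stanza name is taken from your props.conf; the regexes are illustrative and untested against your data):

[eventhub:sourcewithmixedsources]
EXTRACT-fields-for-app1 = ^(?<Environment>\S+)\s+(?<ProductName>\S+)\s+(?<UtcDate>\S+)\s+(?<RequestId>\S+)\s+(?<ClientId>\S+)\s+(?<ClientIp>\d+\.\d+\.\d+\.\d+)$
EXTRACT-fields-for-app2 = ^(?<Environment>\S+)\s+(?<ProductName>\S+)\s+(?<UtcDate>\S+)\s+(?<Instance>\S+)\s+(?<Region>\S+)\s+(?<RequestId>\S+)\s+(?<ClientIp>\d+\.\d+\.\d+\.\d+)\s+(?<DeviceId>\S+)$

Each regex only matches events with the right number of whitespace-separated fields and the IP anchored in the right position, so non-matching templates simply contribute no fields.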
https://www.cisco.com/c/en/us/td/docs/security/cisco-secure-cloud-app/user-guide/cisco-security-cloud-user-guide.html
To solve your problem with third-party users and time range flexibility, try this:

1. Make a new summary index just for this data.
2. Set up a scheduled search that regularly writes the required data into this new index.
3. Change your dashboard to use this new index.
4. Give the third-party users access to only this new index.

Now you can add a time range picker to your dashboard. Hope this helps. Karma would be appreciated.
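A rough sketch of step 2 using collect (index, sourcetype, and field names here are placeholders, not from your environment):

index=your_source_index sourcetype=your_sourcetype
| stats count by host, status
| collect index=third_party_summary

Save this as a report scheduled to run, say, every hour over the previous hour. The dashboard panels then search index=third_party_summary, and the time range picker works against the summarized events.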
1. You can start with your base search. Add a time range and average calculation:

index=* tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word")
| bucket _time span=1d
| stats count as daily_count by host, _time
| stats avg(daily_count) as avg_daily_count by host

2. Create a dashboard and add a table panel using this search.
3. Add visualizations like bar charts to represent the data graphically.

Key metrics to track:
- Total AUTHZ attempts
- Successful vs. failed authorizations
- Authorization attempts by host
- Authorization attempts by user
- Peak authorization times
- Unusual patterns or anomalies

Dashboard components:
- Summary statistics panel
- Time series graph of authorization attempts
- Top hosts by authorization usage (table or bar chart)
- Top users by authorization attempts (table or bar chart)
- Geographical map of authorization attempts (if applicable)
- Failed authorization attempts breakdown

The links below should help you out:
https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchTutorial/Createnewdashboard
https://www.splunk.com/en_us/resources/videos/create-dashboard-in-splunk-enterprise.html
https://splunkbase.splunk.com/app/1603

Hope this helps
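For steps 2 and 3, a minimal SimpleXML sketch of a table panel wrapping that search (the label and time range are placeholders):

<dashboard>
  <label>Authorization Overview</label>
  <row>
    <panel>
      <title>Average daily authorization attempts by host</title>
      <table>
        <search>
          <query>index=* tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word")
| bucket _time span=1d
| stats count as daily_count by host, _time
| stats avg(daily_count) as avg_daily_count by host</query>
          <earliest>-7d@d</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

Swapping <table> for <chart> (with an option such as <option name="charting.chart">column</option>) gives the bar-chart view mentioned in step 3.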
index=name tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word") | stats count by host | table host, count
@sainag_splunk This method did work, until I found out that the users who are viewing the dashboard are not able to see the results, due to not having access to the index. The users viewing this dashboard are third party, people we do not want to have access to the index (for example, users outside the org), hence the reason the dashboard used saved reports, where the results are viewable. But, as I mentioned, we then face the issue of changing the time range picker, since the saved reports display a static range, whereas we want the results to change as we specify a time range with the input. So inline searches would not work in this scenario.
I am working to integrate Splunk with AWS to ingest CloudTrail logs. Looking at the documentation for the Splunk Add-on for AWS, under steps 3, 4, and 8 it says to create an IAM user, an access key, and then to input the key ID and secret ID into the Splunk Add-on: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Admin/AWSGDI#Step_3:_Create_a_Splunk_Access_user Can we instead leverage a cross-account IAM role with an external ID for this purpose? We try to limit IAM user creation in our environment and this also creates additional management overhead, such as needing to regularly rotate the IAM user access key credentials. Leveraging a cross-account IAM role that can be assumed by Splunk Cloud is a much simpler (and more secure) implementation. Thanks!
Hi, I have a dropdown based on domain and entity. When I select a domain, an entity, and a date, it fetches the results for init lambda, init duplicate, and init error. I want an extra submit button so that the results for init lambda, init duplicate, and init error are only fetched once I hit submit; otherwise nothing should be fetched.

<row>
  <panel>
    <title>VIEW BY ENTITY</title>
    <input type="dropdown" token="tokEnvironment" searchWhenChanged="true">
      <label>Domain</label>
      <choice value="Costing">Costing</choice>
      <change>
        <set token="inputToken">""</set>
        <set token="outputToken">""</set>
        <set token="inputToken2">""</set>
        <set token="outputToken2">""</set>
        <unset token="tokSystem"></unset>
        <unset token="form.tokSystem"></unset>
      </change>
      <default>Cost</default>
      <initialValue>Cost</initialValue>
    </input>
    <input type="dropdown" token="tokSystem" searchWhenChanged="false">
      <label>Data Entity</label>
      <fieldForLabel>$tokEnvironment$</fieldForLabel>
      <fieldForValue>$tokEnvironment$</fieldForValue>
      <search>
        <!--<progress>-->
        <!-- match attribute for condition uses eval-like expression (see Splunk search language 'eval' command) -->
        <!-- logic: if resultCount is 0, then show a static html element, and hide the chart element -->
        <!-- <condition match="'job.resultCount'== 0">-->
        <!-- <set token="show_html">true</set>-->
        <!-- </condition>-->
        <!-- <condition>-->
        <!-- <unset token="show_html"/>-->
        <!-- </condition>-->
        <!-- </progress>-->
        <query>| makeresults | fields - _time | eval Costing="GetQuoteByCBD,bolHeader,bolLineItems,laborProcess,costSheetCalc,FOB" | fields $tokEnvironment$ | makemv $tokEnvironment$ delim="," | mvexpand $tokEnvironment$</query>
      </search>
      <change>
        <condition match="$label$==&quot;get&quot;">
          <set token="inputToken">get</set>
          <set token="outputToken">get</set>
          <set token="inputToken2">b</set>
          <set token="outputToken2">b</set>
          <set token="inputToken3">c</set>
          <set token="outputToken3">c</set>
          <set token="inputToken4">d</set>
          <set token="outputToken4">d</set>
          <set token="inputToken5">e</set>
          <set token="outputToken5">e</set>
          <set token="inputToken3">3</set>
          <set token="outputToken3">3</set>
          <set token="apiToken">d</set>
          <set token="entityToken">get</set>
        </condition>
        <condition match="$label$==&quot;batch&quot;">
          <set token="inputToken">batch</set>
          <set token="outputToken">batch</set>
          <set token="inputToken2">c</set>
          <set token="outputToken2">c</set>
          <set token="inputToken">b</set>
          <set token="outputToken4">b</set>
          <set token="inputToken3">d</set>
          <set token="outputToken3">d</set>
          <set token="apiToken">b</set>
          <set token="inputToken5">f</set>
          <set token="outputToken5">f</set>
          <set token="entityToken">batch</set>
        </condition>
        <condition match="$label$==&quot;Calc&quot;">
          <set token="inputToken">Calc</set>
          <set token="outputToken">Calc</set>
          <set token="inputToken2">init</set>
          <set token="outputToken2">init</set>
          <set token="inputToken">Calc</set>
          <set token="outputToken4">Calc</set>
          <set token="inputToken3">d</set>
          <set token="outputToken3">d</set>
          <set token="apiToken">Calc</set>
          <set token="entityToken">Calc</set>
        </condition>
      </change>
      <default>get</default>
    </input>
    <input type="time" token="time_picker" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <html>
      <ul></ul>
    </html>
  </panel>
</row>
<row>
  <panel>
    <title>Init Lambda</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search msg="gemini:streaming:info:*" | stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <panel>
    <title>Init Lambda - Duplicate</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search msg="gemini:streaming:warning:*" | stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
  <panel>
    <title>Init Lambda - Error</title>
    <table>
      <search>
        <query>index="" source IN ("/aws/lambda/aa-$outputToken$-$stageToken$-$outputToken2$") | spath msg | search msg="gemini:streaming:error:*" | stats count by msg</query>
        <earliest>$time_picker.earliest$</earliest>
        <latest>$time_picker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
@richgalloway  thanks. It worked.
Hi, I have an eventhub that receives data from multiple applications, with different numbers and values of columns. The events typically look like this (as an example):

Environment ProductName UtcDate RequestId Clientid ClientIp #app1
Environment ProductName UtcDate Instance Region RequestId ClientIp DeviceId #app2
Environment ProductName UtcDate DeviceId ClientIp #app3
PROD Product1 2024-04-04T20:21:20 abcd-12345-dev bcde-ed-1234 10.12.13.14 #app1
PROD Product2 2024-04-04T20:23:20 gwa us 126d-a23d-1234-def1 10.23.45.67 abcAJHSSz12. #app2
TEST Product3 2024-04-04T20:25:20 Ghsdhg1245 12.34.57.78 #app3
Environment ProductName UtcDate Instance Region RequestId ClientIp DeviceId #app2

(The #app tag at the end of each line is not part of the log; it just annotates the different entries.)

How can Splunk automagically select which "format" to use with REPORT/EXTRACT in transforms? On the heavy forwarder:

transforms.conf

[header1]
DELIMS="\t"
FIELDS=Environment,ProductName,UtcDate,RequestId,Clientid,ClientIp

[header2]
DELIMS="\t"
FIELDS=Environment,ProductName,UtcDate,Instance,Region,RequestId,ClientIp,DeviceId

[header3]
DELIMS="\t"
FIELDS=Environment,ProductName,UtcDate,DeviceId,ClientIp

props.conf

[eventhub:sourcewithmixedsources]
INDEXED_EXTRACTIONS = TSV
CHECK_FOR_HEADER = true
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
pulldown_type = 1
REPORT-headers = header1, header2, header3
Is there an integration available to push and pull to and from Palo Alto XSOAR? Looking for an integration to pull incidents and update their status.
The indexers are clustered. I also have two search heads, a deployment server, and an HF. The license manager server is offsite and held at a higher level, organizationally speaking; I don't have direct access to it.

"Did you try restarting Splunk from the backend?" Yes. That did not resolve it.

"Look at the license files in $SPLUNK_HOME/etc/licenses" I just see the temp trial license folder. I checked another server and see the same thing. Should there be anything else there?
See the bolded text:

java -Djava.library.path="<db_agent_home>\auth\x64" -Ddbagent.name="Scarborough Network Database Agent" -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -jar <db_agent_home>\db-agent.jar
Hi @Liping.Zhang, I did some digging around and found this; I hope it's able to lead you to some further investigation: "In this case it is almost certainly the singularityheader injected by the agent which is at fault (Unicode: 0x73 is lowercase 's')."
It is a Splunk-supported SOAR connector: https://github.com/splunk-soar-connectors/crowdstrikeoauth
Does Splunk support the CrowdStrike OAuth API?
Can you please provide your high-level Splunk topology? How many components do you have? Is this clustered, etc.? Did you try restarting Splunk from the backend? Look at the license files in $SPLUNK_HOME/etc/licenses.