All Posts



This is a Splunk forum, not a security analyst forum. No one knows what data is in your sources, and very few volunteers have expertise in the exact domain you work in. If you know which data will produce the answer you've been asked for but have difficulty getting the result you want, illustrate the data and the desired results, then explain the logic between the two without SPL. Volunteers can help you from there.
SOLVED: the install.sh script was ignoring the call for the proxy script, so running runSDKproxy.sh resolved the problem (even though in appdynamics-agent.conf the parameter to automatically start the proxy is ON). Creating a service for the proxy resolved this second point.
Hi @dgiberson, look at the macro:

[cisco_ios_index]
definition = (index=*)
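For context, search macros like this live in macros.conf, and `(index=*)` searches every index. A minimal sketch of narrowing it, assuming (hypothetically) that your Cisco IOS data lands in an index named `cisco`:

```ini
# macros.conf -- sketch; the index name "cisco" is an assumption,
# replace it with wherever your Cisco IOS data actually lands
[cisco_ios_index]
definition = (index=cisco)
```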
Hi @gcusello, thanks. That confirms we don't need to go specifically to the MC server to see this kind of warning/error.
Hi, We are testing manual JavaScript injection in an Oracle APEX application; however, the Dev teams tell us that only the "ords/r" page is showing in the list of pages in AppDynamics, not all the "internal" pages that run underneath. Does anyone have experience configuring the EUM/JavaScript agent for APEX who could give us a hint on how to improve the default configuration so it detects all pages used within the application? Thanks, Roberto
Hi @splunkreal, it's normal that all warning or error messages are displayed to admins on the servers that are usually accessed: the SHs and the MC. What's your doubt? If you think these messages shouldn't be displayed, add an idea on Splunk Ideas (ideas.splunk.com). Ciao. Giuseppe
Hello, I'm wondering whether warnings like "Local KV Store has replication issues" are shown to any admin user on any Splunk Web instance (the DMC server and any SHC member)? Thanks.
What do you mean by "takes 30ms"? Measured how - from when till when? Did you do a tcpdump to check packets timeline? Did you test just a single event or pushed in batch?
Hi @Jyo_Reel, I don't know what could have happened; maybe someone made an error and configured a data input on 8089. The only way to understand is to find who sent this unencrypted traffic on this port and check its configuration. Ciao. Giuseppe P.S.: Karma Points are appreciated by all the Contributors
Logging a single line to Splunk is taking about 30ms with the HEC appender. E.g., the result of the below is 30ms.

Long start1 = System.currentTimeMillis();
log.info("Test logging");
Long start2 = System.currentTimeMillis();
log.info("logTime={}", start2 - start1);

This is our logback config -  Taking 30ms is too long for a single log action. Are we missing anything in the config?
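As a side note on methodology: timing one call with System.currentTimeMillis() folds one-time appender setup into the measurement and is limited to millisecond granularity. A minimal sketch of averaging over many calls with System.nanoTime(), using plain java.util.logging as a stand-in (not the Splunk HEC appender, so the absolute numbers are only illustrative):

```java
import java.util.logging.Logger;

public class LogLatency {
    // Average per-call latency over many iterations, after a warmup,
    // so one-time initialization costs don't dominate the result.
    static long averageNanos(Runnable action, int warmup, int iterations) {
        for (int i = 0; i < warmup; i++) action.run();
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) action.run();
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setUseParentHandlers(false); // keep console I/O out of the measurement
        long avg = averageNanos(() -> log.info("test"), 1_000, 10_000);
        System.out.println("average nanos per log call: " + avg);
    }
}
```

If the averaged number is much lower than 30ms, the original measurement was mostly first-call setup cost rather than steady-state appender latency.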
Event Actions > Show Source is failing at 100/1000 events with the below 2 errors:

[e430ac81-66f7-40b8-8c76-baa24d2813c6_wh-1f2db913c0] Streamed search execute failed because: Error in 'surrounding': Too many events (> 10000) in a single second.
Failed to find target event in final sorted event list. Cannot properly prune results

The result sets are not huge, maybe 150 events. What do these errors mean, and how do we resolve them?
Has anyone faced this issue?
Yes, the on-prem search heads will be able to send queries to the AWS indexers. Whether those queries are successful or not is another question, the answer to which depends on how the indexers are configured. Are they in a cluster? What are the replication factor and search factor settings? An indexer cluster with fully replicated and searchable data will be able to respond to search requests even if some peers are down. The likelihood of the cluster being fully searchable goes down with each lost indexer. If the indexers go down in rapid succession, then it's possible (depending on the configuration) for some data to be unreachable. In that case, the search requests will return incomplete results.
Try something like this

| eval bucket=case(dur < 30, 0, dur <= 60, 1, dur <= 120, 2, dur <= 240, 3, dur > 240, 4)
| stats count as "Number of Queries" by bucket
| append
    [| makeresults
    | fields - _time
    | eval bucket=mvrange(0,5)
    | mvexpand bucket
    | eval "Number of Queries"=0]
| stats sum('Number of Queries') as "Number of Queries" by bucket
| eval bucket=mvindex(split("Less than 30sec,30sec - 60sec,1min - 2min,2min - 4min,More than 4min", ","), bucket)
From the below XML we created a dropdown for site, and it's working as expected, but we need a dropdown for country as well. Country data is not present in the logs. We have 2 countries, China and India. We need a dropdown with country, and based on the country, the matching sites should be shown. How can we do this?

<form version="1.1" theme="light">
  <label>Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="site">
      <label>SITE</label>
      <choice value="*">All</choice>
      <prefix>site="</prefix>
      <suffix>"</suffix>
      <default>*</default>
      <fieldForLabel>site</fieldForLabel>
      <fieldForValue>site</fieldForValue>
      <search>
        <query>| makeresults
| eval site="BDC"
| fields site
| append [| makeresults | eval site="SOC" | fields site]
| sort site
| table site</query>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Total Count Of DataRequests</title>
        <search>
          <query>index=Datarequest-index $site$
| rex field=_raw "application :\s(?<Reqtotal>\d+)"
| stats sum(Reqtotal)</query>
          <earliest>$timepicker.earliest$</earliest>
          <latest>$timepicker.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
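One common pattern, sketched below under the assumption that BDC belongs to China and SOC to India (swap the mapping as needed): add a static country dropdown, then drive the site dropdown's populating search from the $country$ token so it re-runs whenever the country changes.

```xml
<input type="dropdown" token="country">
  <label>COUNTRY</label>
  <choice value="China">China</choice>
  <choice value="India">India</choice>
  <default>China</default>
</input>
<input type="dropdown" token="site">
  <label>SITE</label>
  <fieldForLabel>site</fieldForLabel>
  <fieldForValue>site</fieldForValue>
  <prefix>site="</prefix>
  <suffix>"</suffix>
  <search>
    <!-- the case() mapping is an assumption; adjust country/site pairs -->
    <query>| makeresults
| eval site=case("$country$"=="China", "BDC", "$country$"=="India", "SOC")
| table site</query>
  </search>
</input>
```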
Extra credit: Is there a way to force all 5 buckets to always appear in the results, even if they have a 0 count?
Thank you!  I was so close lol. I hacked it by prepending "  " and " " to a couple of bucket names to force them to sort ahead, but that made me cringe.  This is far better.  Thanks again!
Ugh. As I remember from quite a few years back, tomcat logs are awful to deal with. How are you rotating them? I suppose you're trying logrotate with the copytruncate option (because that was the only way that even remotely resembled a "working" solution for rotating this). The problem I remember from my previous job was that in this case Java wouldn't "rewind" the file position pointer and would continue to append at the old file position even though the file got truncated, which meant you ended up with a sparse file filled with "virtual zeros" up to the previous logfile's end. catalina.out is a very ugly thing to deal with. As far as I remember, it didn't rotate on its own, and if you wanted to "normally" rotate it you'd have to restart your tomcat completely, which is a huge PITA.
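For reference, the copytruncate setup mentioned above typically looks like this in logrotate (the path is a placeholder for illustration; note that copytruncate can also lose lines written between the copy and the truncate):

```
# /etc/logrotate.d/tomcat -- sketch; adjust the path to your install
/opt/tomcat/logs/catalina.out {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}
```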
Have you tested if it works for both /raw and /event endpoints? Just asking because I haven't used it on HEC so I don't know
What do you mean by "fixed"? Assigning _meta has worked "since always" (I've been using it for the last 5 years or so). But since it's a single setting, you can't just stack separate definitions from multiple files; only one will be the "winning" one, according to the normal rules of config precedence.
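For anyone landing here, a minimal sketch of the _meta setting in inputs.conf (the path and field names are made up for illustration): fields are space-separated key::value pairs, and per the precedence point above, only one _meta value survives for a given input.

```ini
# inputs.conf -- hypothetical monitor stanza
[monitor:///var/log/myapp/app.log]
index = main
# indexed fields attached to every event from this input
_meta = env::prod team::payments
```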