All Posts


Thank you very much @gcusello! You never fail to deliver the best solutions for Splunk newbies like me.
Hi @splunky_diamond, the best guide for add-on creation is the Splunk Add-on Builder app (https://splunkbase.splunk.com/app/2962). It guides you through creating your add-on and normalizing your data into a CIM-compliant data flow that you can also use in ES or ITSI. Ciao. Giuseppe
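If you prefer to hand-roll the parsing instead of (or before) using the Add-on Builder, the core of a parsing add-on is just a props.conf stanza for your sourcetype. A minimal sketch, assuming a hypothetical "fudo:pam" sourcetype name and made-up timestamp/field patterns (check them against your actual Fudo PAM events):

```
# props.conf — minimal hand-rolled parsing sketch (all patterns are assumptions)
[fudo:pam]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
# search-time field extraction; adjust the regex to your real key=value layout
EXTRACT-fudo_user = user=(?<user>\S+)
```

The Add-on Builder generates this kind of configuration for you, plus the CIM field mappings, which is why it is the usual recommendation.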
Hello Splunkers! I am collecting logs from Fudo PAM, for which I haven't found any suitable existing add-on on Splunkbase. The logs are being collected over syslog, yet the generic "syslog" sourcetype doesn't suit the events coming from my source. I searched the web for tutorials on how to create your own add-on in Splunk to parse unusual logs like mine, but I haven't found any.

Could someone please help me with that? Does anyone have a tutorial or guide on how to create your own parser, or can you explain what is needed for one, in case it's not a difficult task? If someone decides to answer by explaining how to create your own add-on, I would really appreciate a detailed description covering: required skills, difficulty, how long it will take, and whether it's the best practice in such situations or there are more efficient ways.

Again, my main goal is to get my logs from Fudo PAM (coming over syslog) parsed properly. Thank you for taking the time to read my post and reply to it.
1. My task is to calculate the number of events with a "FAILED" value in the "RESULT" key. It looks like this and it works (thanks to you guys!): `index="myIndex" sourcetype="mySourceType" | foreach "*DEV*" "UAT*" [| eval keep=if(isnotnull('<<FIELD>>'), 1, keep)] | where keep==1 | stats count(eval('RESULT'=="FAILED")) as FAILS | stats values(FAILS)`

This gets even more confusing. "Number of events with 'FAILED' value in 'RESULT' key" implies that you already have a field (key) named RESULT that may have a value of FAILED. If this is correct, shouldn't your search begin with `index="myIndex" sourcetype="mySourceType" RESULT=FAILED`?

`| stats count(eval('RESULT'=="FAILED")) as FAILS` gives one single numeric value. What is the purpose of cascading `| stats values(FAILS)` after this? `| stats count(eval('RESULT'=="FAILED")) as FAILS | stats values(FAILS)` gives the exact same single value.

Most importantly, as @PickleRick and I have repeatedly pointed out, Splunk (like most programming languages) does not perform sophisticated calculations in the name space, mostly because there is rarely a need to do so. When there is a serious need to manipulate the variable name space, it is usually because the upstream programmer made a poor design choice. Splunk, for its part, is super flexible in handling data without preconceived field names.

As @bowesmana suggested, if you can demonstrate that your raw data contains those special keys, it is probably much easier (and more performant) to simply use a TERM() filter to limit raw events rather than trying to apply semantics to extracted field names. (TERM is case insensitive by default.) If you find TERM() too limiting, you can also use Splunk's super flexible field extraction to extract environment groups "Prod" and "Dev" using regex. This way, all you need to do is `index="myIndex" sourcetype="mySourceType" RESULT=FAILED environment=Dev | stats count`. You can even do something like `index="myIndex" sourcetype="mySourceType" RESULT=FAILED | stats count by environment`. Any of these alternatives is better in clarity and efficiency.
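For illustration, a sketch of that last approach, assuming the environment name (e.g. DEV or UAT) appears somewhere in the raw event text — the regex is a placeholder, not your actual format:

```
index="myIndex" sourcetype="mySourceType" RESULT=FAILED
| rex "(?<environment>DEV|UAT)"
| stats count by environment
```

With a proper search-time extraction defined in props.conf instead of an inline rex, the first one-liner above is all you would ever need.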
This is a Splunk forum, not a security analyst forum. No one knows what data is in your sources. Very few have expertise in the exact domain you work in. If you know what data will give the answer you were asked for, but have difficulty getting the result you want, illustrate the data and desired results, then explain the logic between the two without SPL. Volunteers can help you from there.
SOLVED: the install.sh script was ignoring the call to the proxy script, so running runSDKproxy.sh manually resolved the problem (even though in appdynamics-agent.conf the parameter to automatically start the proxy is ON). Creating a service for the proxy resolved this second point.
Hi @dgiberson, look at the macro:

[cisco_ios_index]
definition = (index=*)
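That macro lives in the app's macros.conf (you can also edit it in Splunk Web under Settings > Advanced search > Search macros). If you want `cisco_ios_index` to stop scanning every index, narrow the definition — a sketch assuming your Cisco data lives in a hypothetical index named "cisco":

```
# macros.conf — point the macro at the index that actually holds the data
# (the index name "cisco" is an assumption)
[cisco_ios_index]
definition = (index=cisco)
```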
Hi @gcusello, thanks. That confirms we don't need to go specifically to the MC server to see this kind of warning/error.
Hi, We are testing manual JavaScript injection in an Oracle APEX application; however, the Dev teams tell us that only the "ords/r" page shows up in the list of pages in AppDynamics, not all the "internal" pages that run underneath. Does anyone have experience configuring the EUM/JavaScript agent for APEX who could give us a hint on how to improve the default configuration so it detects all pages used within the application? Thanks, Roberto
Hi @splunkreal, it's normal that all warning or error messages are displayed for admins on the servers that are usually accessed: the SHs and the MC. What's your doubt? If you think that these messages shouldn't be displayed, add it to Splunk Ideas (ideas.splunk.com). Ciao. Giuseppe
Hello, I'm wondering whether warnings like "Local KV Store has replication issues" are shown to every admin user on every Splunk Web instance (the DMC server and any SHC member)? Thanks.
What do you mean by "takes 30ms"? Measured how, from when till when? Did you do a tcpdump to check the packet timeline? Did you test just a single event or push a batch?
Hi @Jyo_Reel, I don't know what could have happened; maybe someone made a mistake and configured a data input on 8089. The only way to understand is to find out who sent this unencrypted traffic to this port and check their configuration. Ciao. Giuseppe P.S.: Karma Points are appreciated by all the Contributors
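One hedged starting point for tracking down the sender: splunkd usually logs failed or odd connections to the management port in _internal, though the exact messages vary by version, so treat this as a rough sketch rather than a known-good search:

```
index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) "8089"
| stats count by host, component
```

The host values should point you at the machines whose configurations need checking.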
Logging a single line to Splunk is taking about 30ms with the HEC appender. E.g., the result of the below is 30ms:

Long start1 = System.currentTimeMillis();
log.info("Test logging");
Long start2 = System.currentTimeMillis();
log.info("logTime={}", start2 - start1);

This is our logback config -

Taking 30ms is too long for a single log action. Are we missing anything in the config?
Event Actions > Show Source is failing at 100/1000 events with the below two errors:

[e430ac81-66f7-40b8-8c76-baa24d2813c6_wh-1f2db913c0] Streamed search execute failed because: Error in 'surrounding': Too many events (> 10000) in a single second.
Failed to find target event in final sorted event list. Cannot properly prune results.

The result sets are not huge, maybe 150 events. What do these errors mean and how do we resolve them?
Has anyone faced this issue?
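The first message is fairly literal: Show Source runs an internal search using the 'surrounding' operation to fetch events around the one you selected, and it gives up when more than 10,000 events share the same second, after which it can no longer locate the target event and prune around it. A common workaround is to skip Show Source and run a plain search scoped to the same index/source/host in a tight window around the event's timestamp — all values below are placeholders for your event:

```
index="my_index" source="my_source" host="my_host"
    earliest="04/15/2024:10:29:00" latest="04/15/2024:10:31:00"
```

If a single second really does carry tens of thousands of events from that source, narrowing the window (or the host/source scope) is the practical fix.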
Yes, the on-prem search heads will be able to send queries to the AWS indexers. Whether those queries succeed is another question, the answer to which depends on how the indexers are configured. Are they in a cluster? What are the replication factor and search factor settings? An indexer cluster with fully replicated and searchable data will be able to respond to search requests even if some peers are down. The likelihood of the cluster being fully searchable goes down with each lost indexer. If the indexers go down in rapid succession, then it's possible (depending on the configuration) for some data to become unreachable. In that case, search requests will return incomplete results.
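For reference, those two settings live on the cluster manager — a minimal sketch, assuming a recent Splunk version (older releases use mode = master) and illustrative RF=3/SF=2 values rather than your actual ones:

```
# server.conf on the cluster manager (values are illustrative)
[clustering]
mode = manager
replication_factor = 3
search_factor = 2
```

With RF=3/SF=2, the cluster can lose up to two peers and still have every bucket present, and one of the surviving copies is already searchable without a rebuild.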
Try something like this

| eval bucket=case(dur < 30, 0, dur <= 60, 1, dur <= 120, 2, dur <= 240, 3, dur > 240, 4)
| stats count as "Number of Queries" by bucket
| append
    [| makeresults
    | fields - _time
    | eval bucket=mvrange(0,5)
    | mvexpand bucket
    | eval "Number of Queries"=0]
| stats sum('Number of Queries') as "Number of Queries" by bucket
| eval bucket=mvindex(split("Less than 30sec,30sec - 60sec,1min - 2min,2min - 4min,More than 4min", ","), bucket)

The append subsearch seeds all five bucket values with a zero count, so this also covers the extra-credit question: buckets with no matching events still appear with "Number of Queries" = 0.
From the below XML we created a dropdown for site, and it's working as expected, but we need a dropdown for country as well. However, country data is not present in the logs. We have two countries, China and India. We need a dropdown with country, and based on the country, the matching sites should be shown. How can we do this?

<form version="1.1" theme="light">
  <label>Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="site">
      <label>SITE</label>
      <choice value="*">All</choice>
      <prefix>site="</prefix>
      <suffix>"</suffix>
      <default>*</default>
      <fieldForLabel>site</fieldForLabel>
      <fieldForValue>site</fieldForValue>
      <search>
        <query>| makeresults
| eval site="BDC"
| fields site
| append
    [| makeresults
    | eval site="SOC"
    | fields site]
| sort site
| table site</query>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Total Count Of DataRequests</title>
        <search>
          <query>index=Datarequest-index $site$
| rex field=_raw "application :\s(?<Reqtotal>\d+)"
| stats sum(Reqtotal)</query>
          <earliest>$timepicker.earliest$</earliest>
          <latest>$timepicker.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
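Since the country never appears in the logs, one way is to hard-code a country-to-site mapping in the dropdown searches. A sketch, assuming a hypothetical mapping of China to BDC and India to SOC (swap in your real mapping, or move it to a lookup file): add a country dropdown, then drive the site dropdown from its token.

```xml
<!-- hedged sketch; the China:BDC,India:SOC mapping is an assumption -->
<input type="dropdown" token="country" searchWhenChanged="true">
  <label>COUNTRY</label>
  <choice value="China">China</choice>
  <choice value="India">India</choice>
  <default>China</default>
</input>
<input type="dropdown" token="site">
  <label>SITE</label>
  <prefix>site="</prefix>
  <suffix>"</suffix>
  <fieldForLabel>site</fieldForLabel>
  <fieldForValue>site</fieldForValue>
  <search>
    <query>| makeresults
| eval mapping=split("China:BDC,India:SOC", ",")
| mvexpand mapping
| eval country=mvindex(split(mapping, ":"), 0), site=mvindex(split(mapping, ":"), 1)
| where country="$country$"
| table site</query>
  </search>
</input>
```

If the mapping grows beyond a couple of entries, a lookup file with country and site columns (queried with inputlookup in both dropdowns) scales better than an inline string.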
Extra credit: Is there a way to force all 5 buckets to always appear in the results, even if they have a 0 count?