All Posts

Hi @Nawab, did you configure a drilldown search for your Correlation Search? It's not automatic. Ciao. Giuseppe
Hi @Nawab, in this way you can display these fields in the Incident Review dashboard, but I'm not sure it's possible to have a dynamic Rule Name. Anyway, why would you want one? With different Rule Names you cannot compute statistics on or group your rules. It's instead very important to have the needed information in the Incident Review dashboard. Ciao, Giuseppe
Also, the drilldown search is not available either.
Yes, I have these fields in my correlation search, but when I set the notable name, it only shows the rule name instead of the fields I have added: test_alert $src$ $dest$ $user$
Hi @Nawab, to display additional fields in the Incident Review dashboard, you have to check whether these fields are present in the Correlation Search that creates the Notable. If they are, you can customize your dashboard in [ Configure > Incident Management > Incident Review Settings > Incident Review - Table Attributes ]. Ciao. Giuseppe
Hi @Harish2, it's clear: your events don't contain the starting hours (12:00). As I described in my first answer, you have to manage hours in a different way (outside the lookup):

| tstats count latest(_time) as _time WHERE index=app-idx host="*abfd*" sourcetype=app-source-logs BY host
| eval date=strftime(_time,"%Y-%m-%d"), day=strftime(_time, "%d"), hour=strftime(_time, "%H")
| search NOT (hour<8 OR hour>11 OR [ | inputlookup calendsr.csv WHERE type="holyday" | fields date ] )
| fields - _time day hour

obviously using the date in the lookup without hours and minutes. Ciao. Giuseppe
Hi @phanikumarcs, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @Satyapv, let me understand: for each TraceNumber you can have Error="yes" (or something else), Exception="yes" (or something else) and ReturnCode="yes" (or something else). You want a table with the TraceNumber and, in separate columns, Error, Exception and ReturnCode set to "YES" if there's something or "NO" if there's nothing, is that correct? In this case, you have to handle the missing values (e.g. with the fillnull command), something like this:

index=Application123 TraceNumber=*
| eval Error=if(isnotnull(Error),"YES","NO"), Exception=if(isnotnull(Exception),"YES","NO"), ReturnCode=if(isnotnull(ReturnCode),"YES","NO")
| table TraceNumber Error Exception ReturnCode

(Note that in eval a comparison like Error="*" matches a literal asterisk, not "any value", so isnotnull() is used instead.) It's not clear to me whether the Error, Exception and ReturnCode fields are already extracted; if not, please share some samples so I can help you with the extraction. Ciao. Giuseppe
When we create a notable, we want to include certain fields such as source IP and destination IP. When I create the rule and add these fields as $src$ and $dest$ in Enterprise Security 7.0.0 it works, but in 7.3.0 it does not show any result.
Hi @richgalloway, how can we consolidate ThousandEyes data into Splunk to centralize alerts on the dashboard? Please help with the above question. Thanks
Hello All, I have an index = Application123 that contains a unique ID known as TraceNumber. For each TraceNumber we have errors, exceptions and return codes. We have a requirement to summarize them in a table like the one below: if an error is found in the index, the table value should be YES, otherwise NO; the same goes for Exception and ReturnCode. Note that the errors, exceptions and return codes are in the content of the index, in the Message log field.

TraceNumber   Error   Exception   ReturnCode
11111         YES     NO          YES
1234          YES     NO          YES

Any help would be appreciated
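Since the question says the errors, exceptions and return codes live inside the Message field rather than as extracted fields, here is a minimal SPL sketch (an assumption, not from the thread: it presumes the literal words "Error", "Exception" and "ReturnCode" appear in Message; adjust the regexes to the actual log format):

```
index=Application123 TraceNumber=*
| eval Error=if(match(Message, "(?i)error"), "YES", "NO"),
       Exception=if(match(Message, "(?i)exception"), "YES", "NO"),
       ReturnCode=if(match(Message, "(?i)returncode"), "YES", "NO")
| stats max(Error) as Error, max(Exception) as Exception, max(ReturnCode) as ReturnCode by TraceNumber
```

The stats max() collapses multiple events per TraceNumber; it relies on "YES" sorting after "NO" lexicographically, so a TraceNumber with at least one matching event comes out as YES.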
Hi team, as mentioned, the payload field contains the entity-internal-id and lead-id in array format. I want to emit a separate event containing one lead-id and one entity-internal-id, with the remaining values going into the next event, respectively. Kindly suggest.

correlation_id: ********
custom_attributes: {
     campaign-id: ****
     campaign-name: ******
     country:
     entity-internal-id: [
       12345678
       87654321
     ]
     lead-id: [
       11112222
       33334444
     ]
     marketing-area: *****
     record_count:
     root-entity-id: 2
}
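A common SPL pattern for splitting paired multivalue fields into one event per pair is mvzip + mvexpand. A hedged sketch, assuming the arrays are extracted with spath into the JSON paths shown in the event above (field names may need adjusting to your sourcetype):

```
| spath
| rename "custom_attributes.entity-internal-id{}" as entity_id, "custom_attributes.lead-id{}" as lead_id
| eval pair=mvzip(entity_id, lead_id, "|")
| mvexpand pair
| eval entity_id=mvindex(split(pair, "|"), 0), lead_id=mvindex(split(pair, "|"), 1)
| fields - pair
```

mvzip pairs the values positionally (first with first, second with second), mvexpand fans each pair out into its own event, and the final eval splits the pair back into two single-valued fields.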
Hello, how can I change the font size of the y-axis values in a Splunk dashboard bar chart? I tried:

<html>
  <style>
    #rk g[transform] text {
      font-size: 20px !important;
      font-weight: bold !important;
    }
    g.highcharts-axis.highcharts-xaxis text {
      font-size: 20px !important;
    }
    g.highcharts-axis.highcharts-yaxis text {
      font-size: 20px !important;
    }
  </style>
</html>
Our pro license has expired, and I wanted to check on the procedure for applying the upgraded license file.
Thanks, working as expected.
I have two counter streams, and I would like to display the percentage B/(B+C) in the chart, but it always gives me an error.

B = data('prod.metrics.biz.l2_cache_miss', rollup='rate', extrapolation='zero').publish(label='B')
C = data('prod.metrics.biz.l2_cache_hit', rollup='rate', extrapolation='zero').publish(label='C')

How can I create a new metric out of these two to find the cache hit or miss percentage?
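A SignalFlow sketch of the ratio, hedged: it reuses the two data() calls from the question, combines the streams with the standard arithmetic operators before a single publish(), and should be verified against your chart's SignalFlow version:

```
B = data('prod.metrics.biz.l2_cache_miss', rollup='rate', extrapolation='zero')
C = data('prod.metrics.biz.l2_cache_hit', rollup='rate', extrapolation='zero')
miss_pct = (B / (B + C) * 100).publish(label='l2_cache_miss_pct')
```

Computing the ratio on the raw streams and publishing only the derived result also keeps the chart from plotting B and C alongside the percentage.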
Hi All, I am attempting to use the lookup table "is_windows_system_file" in the following SPL, where Processes.process_name needs to match the filename from the lookup table. Once these results are obtained, I then want to see the processes that are not running from C:\Windows\System32 or C:\Windows\SysWOW64.

| tstats `summariesonly` count from datamodel=Endpoint.Processes where Processes.process_name=* by Processes.aid Processes.dest Processes.process_name Processes.process _time
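A hedged sketch of one way to chain the lookup match and the path filter onto that tstats search (assumptions: the lookup's match field is named filename, and the `drop_dm_object_name` macro is available to strip the Processes. prefix; adjust field names to your lookup's actual columns):

```
| tstats `summariesonly` count from datamodel=Endpoint.Processes
    where Processes.process_name=* by Processes.aid Processes.dest Processes.process_name Processes.process _time
| `drop_dm_object_name(Processes)`
| lookup is_windows_system_file filename as process_name OUTPUT filename as is_known
| where isnotnull(is_known)
| search NOT (process="C:\\Windows\\System32\\*" OR process="C:\\Windows\\SysWOW64\\*")
```

The lookup keeps only process names present in the table, and the final search drops anything launched from the two standard system directories.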
I think it's the right analysis. Maybe, just for some tests, I'll try playing with "max_fd" in limits.conf to see how the system behaves. Just to stress the system, as said in the first post; it's only a test to understand the UF better.

I just saw how dangerous it sometimes is to introduce "..." or "*" or any other wildcard in input paths, since the UF can go crazy 🤷‍, like with crcSalt, which can ingest x2/x3/x4/... the data if not properly blacklisted (think about log rotation with gz/zip/bz extensions 🤷‍).

Anyway, there's something else going on than just file descriptors. In a stable environment, the UF should release a file (and drop its fd) after "time_before_close" (5 by default), so it can process the other files in the queue.

Another strange thing: I can't see any WARN about fd in splunkd.log, whereas in other situations I saw the log explicitly say that max_fd had been reached; now it doesn't. Strange! Maybe this behaviour differs across distros, as it should be a system problem, not directly related to the UF's work 🤷‍
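For reference, the two settings discussed above live in different .conf files; a sketch of where each goes (the monitor path is hypothetical and the values are illustrative, not recommendations):

```
# limits.conf
[inputproc]
# upper bound on files the tailing processor keeps open at once
max_fd = 100

# inputs.conf, per monitor stanza (path below is a placeholder)
[monitor:///var/log/myapp]
# seconds a file can sit idle at EOF before the UF closes it
time_before_close = 5
```

Raising max_fd trades memory/descriptor pressure for fewer open/close cycles, while lowering time_before_close frees descriptors faster on quiet files.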
@bowesmana @gcusello @ITWhisperer Thanks for your ideas and for helping me. I finally did it with the addition below, and it gave me the results I desired.

| rename host as Server, Name as Message
| eval Severity=case(
    EventID="1068", "Warning",
    EventID="1", "Information",
    EventID="1021", "Warning",
    EventID="7011", "Warning",
    EventID="6006", "Warning",
    EventID="4227", "Warning",
    EventID="4231", "Warning",
    EventID="1069", "Critical",
    EventID="1205", "Critical",
    EventID="1254", "Critical",
    EventID="1282", "Critical")
| fields Server, EventID, Message, Severity
| search Severity="*$search$*" OR EventID="*$search$*" OR Server="*$search$*" OR Message="*$search$*"
| table _time, Server, EventID, Message, Severity
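As a design alternative (a hedged sketch, not from the thread): the long case() can be replaced by a hypothetical CSV lookup, here called eventid_severity.csv with columns EventID and Severity, so new event IDs are added to the lookup instead of editing the search:

```
| rename host as Server, Name as Message
| lookup eventid_severity.csv EventID OUTPUT Severity
| fields Server, EventID, Message, Severity
| search Severity="*$search$*" OR EventID="*$search$*" OR Server="*$search$*" OR Message="*$search$*"
| table _time, Server, EventID, Message, Severity
```

The trade-off is one extra artifact to manage (the lookup file), in exchange for a mapping that non-SPL users can maintain.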
Has anyone tried this integration? I am facing issues while integrating using this app: https://splunkbase.splunk.com/app/6535 . The add-on only pulls the activity from our TFS server once and does not pull it continuously at the configured interval. No errors are observed in the internal logs. Has anyone tried using this add-on for this integration? Azure DevOps (Git Activity) - Technical Add-On