All Posts



Hi,
here are some good articles about upgrading Splunk:
https://lantern.splunk.com/Splunk_Platform/Product_Tips/Upgrades_and_Migration/Upgrading_the_Splunk_platform
https://community.splunk.com/t5/Installation/What-s-the-order-of-operations-for-upgrading-Splunk-Enterprise/td-p/408003
r. Ismo
Hi @claudiaG,
my hint probably won't cover your full requirement, but with a search using three joins you'll wait for hours. Did you try correlating the events using stats? See my approach and try to adapt it to your use case, remembering that Splunk isn't a DB. Something like this:

index=A
| rename Name as TargetName
| bin span=1w@w0 _time
| stats values(Status) AS Status dc(Status) AS Status_count values(SourceID) AS SourceID values(type) AS type BY TargetID _time
| eval state=case(
    Status_count=1, Status,
    match(Status,"Done") OR match(Status,"Pending"), "Link + State is there",
    NOT match(Status,"Done") OR NOT match(Status,"Pending"), "State is missing",
    1=1, "No Link")
| timechart span=1w@w0 count by state

Ciao.
Giuseppe
Hi,
I have a use case where I need to find direct links between different events of the same index and sourcetype. The result should show me three different bars:
bar 1: count of the existing links (incl. filter criteria matching)
bar 2: count of the existing links where the filter criteria don't match
bar 3: count of the events where there is no existing link at all
I got as far as using a "left join" so as not to lose the non-matching events, but now I don't know how to separate them into a bar diagram, or count them with an if condition. It needs to be counted weekly. Can you help me please? This is my current query:

index=A
| rename Name as TargetName
| join type=left max=0 TargetName
    [ search index=A
    | fields TargetName ID Status]
| join type=left SourceID
    [ search index=A
    | fields SourceID, type]
| join type=left TargetID
    [ search index=A
    | fields TargetID]
| bin span=1w@w0 _time
| eval state=if(match(status,"Done") OR match(status,"Pending"), "Link + State is there", if(NOT match(status,"Done") OR NOT match(status,"Pending"), "State is missing", "No Link"))
| dedup ID _time sortby -state
| timechart span=1w@w0 count by state

Somehow I cannot get all the non-matching, i.e. "No Link", events. Is "if" the right way to get what I need? Do I need to add another "eval" within each join? And if yes, how do I do that? Thank you for any help! This should be my result (see screenshot).
Hi @sbollam,
you have to create three cascading dropdown lists, each one filtered using the tokens of the previous ones, something like this:

<fieldset submitButton="false">
  <input type="dropdown" token="application">
    <label>Application</label>
    <choice value="*">All</choice>
    <default>*</default>
    <fieldForLabel>Application</fieldForLabel>
    <fieldForValue>Application</fieldForValue>
    <search>
      <query>| inputlookup my_lookup.csv | dedup Application | sort Application | table Application</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
  </input>
  <input type="dropdown" token="environment">
    <label>Environment</label>
    <choice value="*">All</choice>
    <default>*</default>
    <fieldForLabel>Environment</fieldForLabel>
    <fieldForValue>Environment</fieldForValue>
    <search>
      <query>| inputlookup my_lookup.csv WHERE Application=$application$ | dedup Environment | sort Environment | table Environment</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
  </input>
  <input type="dropdown" token="index">
    <label>Index</label>
    <choice value="*">All</choice>
    <default>*</default>
    <fieldForLabel>index</fieldForLabel>
    <fieldForValue>index</fieldForValue>
    <search>
      <query>| inputlookup my_lookup.csv WHERE Application=$application$ AND Environment=$environment$ | dedup index | sort index | table index</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
  </input>
</fieldset>

Ciao.
Giuseppe
Hi @yuanliu,
Sorry for the mistake!
`macros1(`$macros2$`, now(), -15d@d, *, virus, *, *, *)`
The value to be passed into macros2 comes from a multiselect. I get an error if I pass two values at a time, because each value passed is an individual macro that contains a different search. I need an OR condition to be applied in that case. Thanks in advance!
Manoj Kumar S
Hi @bwhite,
good for you, see you next time!
Let us know if we can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking,
Giuseppe
P.S.: Karma Points are appreciated by all the contributors.
Hello All,
I have a requirement on the dropdowns. I have the following lookup file, which contains application, environment, and index details. I need to get the environment details related to each application when I choose the app from the dropdown; similarly, the index dropdown must only show the index details based on the values I chose in the application and environment dropdowns. I could get the desired results using the lookup file, but how can this be achieved using an eval condition in the Splunk dashboard rather than the lookup file? I already have the values of these fields in my Splunk results.

application  environment  index
app_a        DEV          aws-app_a_npd
app_a        PPR          aws-app_a_ppr
app_a        TEST         aws-app_a_test
app_a        SUP          aws-app_a_sup
app_a        PROD         aws-app_a_prod
app_b        NPD          aws-app_b_npd
app_b        SUP          aws-app_b_sup
app_b        PROD         aws-app_b_prod
app_c        NPD          aws-app_c_npd
app_c        SUP          aws-app_c_sup
app_c        PROD         aws-app_c_prod
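[Editor's note, not from the thread: one lookup-free pattern is to populate each dropdown directly from the indexed events, filtering on the tokens set by the earlier dropdowns. This is only a sketch; `index=*` and the field names application/environment/index are taken from the poster's table and assumed to exist on the events. A populating search for the third (index) dropdown might look like:]

```
index=* application=* environment=*
| stats count by application environment index
| search application="$application$" environment="$environment$"
| table index
```

The first two dropdowns would use the same base search with progressively fewer token filters, mirroring the cascading lookup approach.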
Hi peeps,
I receive the below error while running a query. This is my query:

eventtype=sfdc-login-history
| iplocation allfields=true SourceIp
| eval cur_t=_time
| streamstats current=t window=2 first(lat) as prev_lat first(lon) as prev_lon first(cur_t) as prev_t by Username
| eval time_diff=cur_t - prev_t
| distance outputField=distance inputFieldlat1=lat inputFieldLat2=prev_lat inputfieldLon1=lon inputFieldLon2=prev_lon
| eval time_diff=-1*time_diff
| eval ratio = distance*3600/time_diff
| where ratio > 500
| geostats latfield=lat longfield=lon count by Application
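[Editor's note, not from the thread: the `distance` command above comes from a third-party app, so its exact argument names are an assumption. If that command is the source of the error, the same great-circle (haversine) distance can be sketched in core SPL eval, assuming lat/lon/prev_lat/prev_lon are in degrees and the result is wanted in km:]

```
| eval rlat1=lat*pi()/180, rlat2=prev_lat*pi()/180
| eval dlat=(prev_lat-lat)*pi()/180, dlon=(prev_lon-lon)*pi()/180
| eval a=sin(dlat/2)*sin(dlat/2) + cos(rlat1)*cos(rlat2)*sin(dlon/2)*sin(dlon/2)
| eval distance=2*6371*atan2(sqrt(a), sqrt(1-a))
```

This uses only built-in eval functions (pi, sin, cos, sqrt, atan2), so it avoids any dependency on the external command.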
Hi Team,
We installed Splunk version 8.0.4 from scratch and created the ClusterLogging and ClusterLogForwarder instances, with Vector pointing to the Splunk VM. We are still unable to see the logs in the dashboard; even the sample logs are not visible.
Regards,
Guru Sairam
Thanks for the reply. I did finally get back to this issue. I checked and noticed that the execute permissions were missing from the scripts, as you mentioned:
rw-rw-rw-
Adding those permissions helped, but something else was still missing that I never found. I finally solved it by downloading the package directly to the server and expanding it there, instead of downloading and unzipping it on my machine first. Everything magically started working.
Hope that helps,
Brad
Hello Everyone,
I am just bringing up Splunk within our environment, so a lot of functions are still new to me. I am trying to use my Windows event data to update User-IDs on Panorama; however, running the below query in my ES environment returns this error:
External search command 'panuserupdate' returned error code 2. Script output = "ERROR Unable to get apikey from firewall: local variable 'username' referenced before assignment"
The query:

index=wineventlog host=xxxxxx
| mvexpand Security_ID
| mvexpand Source_Network_Address
| dedup Security_ID Source_Network_Address
| search Security_ID!="NULL SID"
| rename Security_ID as user
| rename Source_Network_Address as src_ip
| panuserupdate panorama=x.x.x.x serial=000000000000
| fields user src_ip

Brief overview of my data ingestion: Panorama syslog is ingested into Splunk Cloud through a heavy forwarder. The Palo Alto Add-on for Splunk is installed on both the HF and Splunk Cloud, but no data is showing in the app; every metric reads 0. Also, I do have a user account in Panorama with API permissions.
The upgrade sequence is Managers, Search Heads, Indexers, Forwarders.  Each layer must be at the same or higher version than the next layer.  Note that you may have to go through the sequence a few times to get everything up to the newest version while honoring step levels.
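[Editor's note, not from the original reply: a quick way to see where each instance currently stands before and during the upgrade, assuming you can run REST searches from a search head that can reach the other instances, is the server/info REST endpoint:]

```
| rest /services/server/info splunk_server=*
| table splunk_server version server_roles
```

Running this after each layer is upgraded confirms that the version ordering (managers >= search heads >= indexers >= forwarders) still holds.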
Thanks @richgalloway. I'm aware that best practice is for indexers to be at the same or a later version than forwarders (https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers).
While I won't leave it this way, would that mean I could leave the forwarders at 6.6 while I upgrade the indexers (to 9.1.1) in my environment, and then upgrade the forwarders when I have time? I had it in my mind that I would have to upgrade the forwarders incrementally while upgrading the indexers, but it seems that isn't the case.
Thanks @hoangs.  I did read that in some documentation but unfortunately I do not have those switches when I go to edit my dashboards or panels.
I can log in there (using my instructor account) but see 0 events (both public and invited). Is there any way to get access? Thanks @gcusello!
Looks great, will test it, thx !
Hello Team,
This is a pre-staging environment (not production): a single server with 12 CPUs, 24 GB of memory, and RAID0 NVMe (2.5 GB/s write, 5 GB/s read). All-in-one deployment (SH + indexer). CPU cores with HT on a dedicated server (6 cores with HT = 12 CPUs, not used by any other VM). Splunk 9.1.1 and ES 7.1.1, fresh install. NO data ingested (0 events in most of the indexes, including main, notable, risk, etc.), so basically no data to be processed yet. Default ES configuration; I have not yet tuned any correlation searches.
And there are already performance problems:
1. MC Scheduler Activity Instance showing 22% skipped searches.
2. ESX reporting minimal CPU usage (the same with memory).
3. MC showing more details: many different accelerated DM tasks are skipped, all the time.
Questions:
1. Obviously the first recommendation would be to disable many of the correlation searches/accelerated DMs, but that is not what I would like to do, because the aim is to test complete ES functionality (by generating a small number of different types of events). Why do I have these problems in the first place? I can see that all the tasks are very short and finish in 1 second; just a few take several seconds. That is expected, since I have 0 events everywhere and always expect a small number of events on this test deployment.
2. What should I do to tune it and make sure there are no problems with skipped jobs? Shall I increase max_searches_per_cpu or base_max_searches? Any other ideas? Overall this seems weird.
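[Editor's note, not from the thread: if you do decide to raise search concurrency, the two settings named above live in limits.conf. The values below are illustrative only, not a tuned recommendation for this box; total search concurrency is derived roughly as max_searches_per_cpu x number_of_cpus + base_max_searches, and the scheduler is only allowed a percentage of that total (max_searches_perc):]

```
# $SPLUNK_HOME/etc/system/local/limits.conf -- sketch with example values

[search]
# raises the overall search concurrency ceiling
base_max_searches = 10
max_searches_per_cpu = 2

[scheduler]
# share of total concurrency available to scheduled searches
max_searches_perc = 75
```

A restart is needed for these to take effect; on a box showing near-idle CPU with many short searches, the scheduler share is often the more relevant knob than the per-CPU multiplier.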
You can use the BOTS dataset: https://github.com/splunk/botsv3
@richgalloway Thanks. As far as I can see, changes on some hosts have not been reflected. What could be the issue?
Hi @MichalG1,
do you have access to Splunk Show (show.splunk.com)? If yes, you already have a complete environment for test and training, with all the add-ons and data installed. Otherwise, it's really difficult to create a relevant test data environment.
Ciao.
Giuseppe