All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello! We previously used QRadar and have now moved to Splunk. We cannot support two systems, so we need to export all the logs from QRadar and bring them into Splunk. Please tell me what solutions exist to pull in all the logs.
The following query gives me results, but I also need a count by each Group.

index=Container_ship action=Decision result=*
| spath "Groups{}"
| search "Groups{}"=Sedan* OR "Groups{}"=SUV*
| dedup invoice
| timechart span=1mon count by result

The results show Yes or No, but I also need to count by the Groups. A group entry can list more than Sedan or SUV (a color, for example), and I only want to count whatever name appears first in the group entry.

Thanks!
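A hedged sketch of one possible approach, using the field names from the question: mvindex() picks the first value of the multivalue Groups{} field, which can then drive the timechart split.

```
index=Container_ship action=Decision result=*
| spath "Groups{}"
| search "Groups{}"=Sedan* OR "Groups{}"=SUV*
| dedup invoice
| eval firstGroup=mvindex('Groups{}', 0)
| timechart span=1mon count by firstGroup
```

The single quotes around 'Groups{}' are needed in eval to reference a field name that contains braces.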
Hi Splunkers,

I am looking to display the data for Product 1 in buckets of seconds (<4.5 seconds, <5.5 seconds, <7.5 seconds, <25 seconds, >=30 seconds), with columns for Cumulative response %, running average, and Volume of transactions. Based on the post below I wrote much the same thing, and it works up to 10 sec but not in the way listed:
https://community.splunk.com/t5/Splunk-Search/Grouping-by-numeric-range/m-p/27498

My query looks like:

.....Search Query.....
| eval frontEndLatency=frontEndLatency/1000
| sort 0 frontEndLatency
| eventstats count as total
| eval in_range=round(case(frontEndLatency<30, floor(2*frontEndLatency)/2+.5, frontEndLatency<10, ceil(frontEndLatency), frontEndLatency>=30, 30.0), 1)
| streamstats count as cnt avg(frontEndLatency) as run_avg
| stats first(total) as total last(run_avg) as run_avg max(cnt) as count count as cnt by in_range, product
| sort 0 in_range
| eval range=if(frontEndLatency>=30, ">= 30.0 sec", "< "+tostring(in_range)+" sec")
| eval pct=round(count/total*100,1)
| eval run_avg=round(run_avg,1)
| rename cnt as "Volume of Transactions" pct as "Cum. response %" run_avg as "Running Avg"
| dedup range
| table range "Cum. response %" "Running Avg" "Volume of Transactions"
| where range="< 4.5 sec" OR range="< 5.5 sec" OR range="< 7.5 sec" OR range="< 25.0 sec" OR range="< 30.0 sec"

It gives me this output:

range       Cum. response %   Running Avg   Volume of Transactions
< 4.5 sec   4.7               1.3           2
< 5.5 sec   7.3               1.7           10
< 7.5 sec   26.5              2.8           21

But it does not give the same table, so I tried changing the bucketing to floor(4*frontEndLatency)/2+.5 or floor(8*frontEndLatency)/2+.5, and that gives me the table but with wrong figures.

Kindly advise, as I am unable to understand what exactly is happening here. I also tried rangemap, but it's not working.

Thanks,
Amit
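One observation about SPL semantics that may explain the behavior, sketched with the question's own field names: case() evaluates its conditions left to right and returns on the first match, so with frontEndLatency<30 listed first, the frontEndLatency<10 branch can never fire. Putting the narrower condition first may be what was intended:

```
| eval in_range=round(case(
    frontEndLatency<10,  ceil(frontEndLatency),
    frontEndLatency<30,  floor(2*frontEndLatency)/2+0.5,
    frontEndLatency>=30, 30.0), 1)
```

Separately, the later `| eval range=if(frontEndLatency>=30, ...)` runs after the stats command, by which point frontEndLatency no longer exists, so that if() always takes the false branch; testing in_range>=30 there instead would be consistent with the fields that survive stats.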
Hi Team, We are ingesting Palo Alto Firewall logs into Splunk from our Syslog server, which we have also made a Heavy Forwarder. On the Syslog Heavy Forwarder we have installed the "Splunk_TA_paloalto" Add-on and configured the input as "pan:firewall", so that based on the TA the data is segregated into different sourcetypes such as "pan:hipmatch", "pan:userid", "pan:system", "pan:traffic" and "pan:threat".

Now we want to filter out log ingestion for a few of these sourcetypes, namely "pan:hipmatch", "pan:userid" and "pan:system". How do I filter those logs from Splunk before ingestion? Where should I place the props and transforms, and what would the props and transforms look like to filter logs from those particular sourcetypes? Kindly help with my request.
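A commonly used pattern for this is a nullQueue transform on the heavy forwarder, placed in a local app (e.g. $SPLUNK_HOME/etc/apps/your_app/local). A hedged sketch follows; the stanza names are made up. Note one caveat: because the Palo Alto TA itself rewrites the sourcetype at index time, filtering may need to key on the incoming pan:firewall sourcetype and match the log type field in the raw CSV rather than the rewritten sourcetypes. Verify the field position and exact type values (e.g. HIP-MATCH vs HIPMATCH) against your PAN-OS version before relying on this.

```
# props.conf (on the heavy forwarder)
[pan:firewall]
TRANSFORMS-filter_pan = discard_pan_subtypes

# transforms.conf
[discard_pan_subtypes]
# The log type is assumed to be the 4th comma-separated field in PAN-OS syslog
REGEX = ^(?:[^,]*,){3}(?:HIP-MATCH|USERID|SYSTEM),
DEST_KEY = queue
FORMAT = nullQueue
```

A restart of the heavy forwarder is needed for index-time props/transforms changes to take effect.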
I have enabled a correlation search with "Notable" and "Run Phantom Playbook" set as adaptive response actions. When matching logs are found, "Run Phantom Playbook" runs normally, but "Notable" doesn't appear to run, and "Incident Review" doesn't show anything. I confirmed with the SPL "index=notable"; it returns no results.

Please help.
Hello, I have installed the Splunk AWS app and Add-on in Splunk Enterprise. I have configured the AWS Add-on and it is showing me inputs, but the AWS app is not showing any data.
I have data that used to be in an if condition. The nameFromChannel is taken from Slack, and they use the names as a sort of mechanism to filter the members that are allowed to be part of the channel. The group credentials are then taken from all the members' usernames and assessed individually as to whether they're allowed to be a member of the group. It goes something like this:

| eval clientName=if(like(nameFromChannel,"%B%"),groupCredentials+ " " +"BASSI",groupCredentials)
| eval clientName=if(like(nameFromChannel,"%W%"),groupCredentials+ " " +"HI WALDORFI",groupCredentials)
| eval clientName=if(like(nameFromChannel,"%V%"),groupCredentials+ " " +"VDWI",groupCredentials)
...

(So a channel that has xxx_BW_xxx in its name means that employees with BASSI / HI / WALDORFI attached to their display names are allowed to be members.) P.S. we cut the nameFromChannel beforehand, so that the only data is the letters.

After some time, we decided to change this to a lookup backed by a CSV that looks like this:

nameFromChannel, groupCredentials
%B%, BASSI
%W%, BASSI WALDORFI
%V%, VDWI

I found a few responses on the page below:
https://community.splunk.com/t5/Splunk-Search/splunk-lookup-like-match/m-p/219946

It was a lot of help when setting up the lookup. However, I noticed that the % symbols are not being recognized even after I added WILDCARD(nameFromChannel) in the advanced options section of my lookup definition, so I changed them to *.

| lookup listOfCompaniesDefinition nameFromChannel OUTPUT groupCredentials
| eval clientName=if(groupCredentials="",clientName,clientName+groupCredentials)

After testing the above, it seems that it isn't evaluating the text properly; my result isn't displayed the way it used to be, and the channels are no longer being retrieved. I'm fairly new to Splunk, so I would like to hear your feedback. Thank you!
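Two things stand out as possible causes, offered as a hedged sketch rather than a confirmed diagnosis. First, `OUTPUT groupCredentials` overwrites the original groupCredentials field from the events with the lookup result, clobbering the value the eval depends on; outputting to a differently named field avoids that. Second, an unmatched lookup leaves the output field null rather than equal to "", so isnull() is the right test:

```
| lookup listOfCompaniesDefinition nameFromChannel OUTPUT groupCredentials AS matchedCredentials
| eval clientName=if(isnull(matchedCredentials), clientName, clientName." ".matchedCredentials)
```

This also assumes the lookup definition has match_type set to WILDCARD(nameFromChannel) and the CSV patterns use * (e.g. *B*), since the % wildcard is a like()/SQL convention that lookups do not honor.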
The goal is to return a table that displays the top 10 (md5) hashes in recorded alerts received over a 60-day period. So the base search is:

index=cb eventtype=bit9_carbonblack_alert status=Resolved | top limit=10 md5

but for each returned result, I'd also like to show its filename and process_name:

index=cb eventtype=bit9_carbonblack_alert status=Resolved | table md5 process_name observed_filename
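Since top discards the other fields, one hedged alternative is to do the counting with stats and collect the extra fields alongside it (values() gathers all distinct names seen per hash, on the assumption there may be more than one):

```
index=cb eventtype=bit9_carbonblack_alert status=Resolved
| stats count values(process_name) AS process_name values(observed_filename) AS observed_filename BY md5
| sort 10 -count
```

`sort 10 -count` keeps the ten highest counts, mirroring top limit=10.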
Hi, Is there a way to send the Query Parameters and possibly the headers to AppDynamics? Thanks
Please help. I just completed the self-learning Fundamentals course and already have a task I want to try; first post here, so please be gentle :-).

I have two files containing job run details for two different jobs over 3 months. The data also contains a Julian date format from the mainframe, but otherwise the event data is very similar. The jobs have a relationship in that on a particular day job_a (file1) is a prerequisite to job_b (file2). In pseudo: I want to calculate the difference between the time job_a started and the time job_b started for each day, in hh:mm. Assumption confirmed: job_a is always earlier than job_b.

This is the config I used to convert the Julian date to Gregorian while importing each of the files, so I could use the event data as my _time and date in the index:

timestamp format regex = %y%j%H.%M.%S.%N
timestamp prefix regex = [A-Z][A-Z]\d\d\s\d\d

Here are the sample records in each file, delimited by space; fields = Julian_date time jobname jobnr

FILE1, 2 sample records:
2021056 00.30.06.05 JOB_A
2021055 01.30.10.43 JOB_A

FILE2, 2 sample records:
2021056 03.30.23.50 JOB_B
2021055 02.00.10.43 JOB_B

The output I would like to achieve is:

DATE         JOB_A_START   JOB_B_START   START_TIME_DIFF
2021-02-24   00H30         03H30         03H00
2021-02-23   01H30         02H00         00H30

I would really appreciate the approach thinking as well (i.e. why each step is done), because I found myself questioning even how I would approach the index, source, and sourcetypes, and ended up just writing a lot of SPL trying to get to something that looked like it would work (eventually I deleted the index). I was very comfortable dealing with each file individually, creating graphs etc., but the minute the second one came in and I needed a "joint" output, comparing the date fields and subtracting them in the event data, I realized I was more confused than with a single file. I think getting the thought process of an experienced person will really help (this is something I miss with self-learning). I hope the above is clearly articulated. Again, thanks in advance.
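One possible approach, sketched with assumed source names (file1/file2) and on the assumption that _time was parsed correctly at import: once both files land in the same index with good timestamps, the "join" is just a stats by day, which avoids the join command entirely. Eval-conditioned stats collapses both jobs onto one row per day, and the difference falls out as simple arithmetic on epoch times:

```
source=file1 OR source=file2
| eval day=strftime(_time, "%Y-%m-%d")
| stats earliest(eval(if(jobname=="JOB_A", _time, null()))) AS a_start
        earliest(eval(if(jobname=="JOB_B", _time, null()))) AS b_start
        BY day
| eval JOB_A_START=strftime(a_start, "%HH%M"),
       JOB_B_START=strftime(b_start, "%HH%M"),
       START_TIME_DIFF=tostring(b_start - a_start, "duration")
| sort - day
| table day JOB_A_START JOB_B_START START_TIME_DIFF
```

tostring(x, "duration") renders as HH:MM:SS rather than the HHhMM layout shown in the desired output; a further strftime-style eval could reshape it if that matters.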
I'm having trouble writing a query which displays the action and host count where the log count is below average on any host. The output would look something like this:

action    host1   host2   host3   host4   host5   host6
getdata   23404   22600   22592   88      22512   22244
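A hedged sketch (index and sourcetype are placeholders): compute per-action, per-host counts, use eventstats to get each action's average across hosts, keep only actions where at least one host falls well below that average, then pivot with xyseries into the wide layout shown above:

```
index=your_index sourcetype=your_sourcetype
| stats count AS logs BY action, host
| eventstats avg(logs) AS avg_logs BY action
| eventstats min(logs) AS min_logs BY action
| where min_logs < avg_logs/2
| xyseries action host logs
```

The avg_logs/2 threshold is arbitrary; strictly "below average" almost always matches some host, so a tolerance usually makes the result meaningful.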
I have created a CSV report, but I want its name to be dynamic, based on one of the values present in the report. I need to generate multiple reports for the different files processed by the system and name each report after the corresponding file name. This parameter is available in one of the report's columns.
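Splunk's scheduled CSV export does not parameterize file names natively, so one possible workaround (hedged: map runs one sub-search per input row, has a maxsearches cap, and outputcsv writes to $SPLUNK_HOME/var/run/splunk/csv on the search head rather than attaching to an email) is to drive outputcsv through map, substituting the column value into the file name. All names here (index, sourcetype, file_name) are placeholders for your actual report search and column:

```
index=your_index sourcetype=your_sourcetype
| stats count BY file_name
| map maxsearches=50 search="search index=your_index file_name=\"$file_name$\" | table * | outputcsv report_$file_name$"
```

Each distinct file_name value then produces its own report_<name>.csv file.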
I am seeing duplicate events in a metrics index, help!   deployment flow: hec client--->load balancer--->HFs (hec receivers)--->Indexers (metrics index)
I think that I have configured all that I need to, but when I try to resolve an address into lat/lon I get "REQUEST_DENIED".

Here is my search:

| stats count
| eval Address_field="123 Blue Bird St,Opp,AL,United States"
| printgeocode type=geocode address=Address_field

My results: REQUEST_DENIED.

password.conf:

[credential::SplunkGoogleGeoCodeApiKey:]
password = $7$8kXiAQ4Pkq9QNHVu9hSuycgpgvEOIUmblTI7g4Jfid60NASBhy5BiElwLkxZLPoB+9nyluNXgG3S5RTO8EzKa/HMVMAFlaQ=

Any help at this point would be appreciated. Thanks in advance, Rcp
I have created a search which has multiple columns. One of the columns, called status, has color formatting: Alert = Red, Disabled = Blue, Success = Green. The formatting works fine in the search; however, when the email is received there is no coloring at all. Is it possible to add the coloring so it appears as it does in the search > report?

Also, we are using Splunk web, not on-premises. Thank you.
Hey all, I'm pretty new to this, so don't insult me for having a simple question. I recently deployed the add-on & app for Microsoft Exchange successfully, and I want to find information about client access over the WAN. Our clients use Outlook to access their mailboxes (as far as I know they use RPC over HTTPS), but our WAN network is pretty busy, so I want information about clients accessing their mail from another site. Is it possible? Thanks a lot, Tankwell
I'm taking over Splunk admin duties from a co-worker who has left the company. We have a distributed environment of two heavy forwarders and four index servers, all running Splunk Enterprise 7.3. I'm fleshing out our upgrade task list for moving to 8.1.2. What is the best way to manage the upgrade across all servers, and do I need to take any special steps to prevent loss of logs? I have the task for the install itself; I need a game plan for how to do it across the 6 servers. Will upgrading each server one at a time (having it back up and running before I move to the next) prevent log/data loss, or is it better practice to do all 6 servers at once (take them all down, upgrade them all, then bring them all back online)? Looking at the compatibility matrix, I should be able to do all the indexers first and still have compatible heavy forwarders. Any advice on how to manage this is greatly appreciated.
Hi there, is there a way to create/display a dynamic dropdown using the value selected from another dropdown? An example of what I'm trying to do: suppose I have 2 fields, Region and Server Name.

Region   Server
NY       A1, B1, X1, Z1
LA       A2, Y2, Z2
TO       A3, B3, C3

So the first dropdown gives the user the option of selecting a region: TO, NY, or LA. When the user selects the region NY, it should display a dropdown listing NY's server names, and when the user selects a NY server, e.g. B1, it should display the panel based on a search for that server.
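A hedged Simple XML sketch of the cascading inputs (the index and field names are assumed): the second dropdown's populating search references the first dropdown's token, so it re-runs whenever the region changes.

```
<input type="dropdown" token="region" searchWhenChanged="true">
  <label>Region</label>
  <fieldForLabel>Region</fieldForLabel>
  <fieldForValue>Region</fieldForValue>
  <search>
    <query>index=your_index | stats count by Region | fields Region</query>
  </search>
</input>
<input type="dropdown" token="server" searchWhenChanged="true">
  <label>Server</label>
  <fieldForLabel>Server</fieldForLabel>
  <fieldForValue>Server</fieldForValue>
  <search>
    <query>index=your_index Region="$region$" | stats count by Server | fields Server</query>
  </search>
</input>
```

The panel's own search then uses $server$ the same way, e.g. index=your_index Server="$server$".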
In module 5 of Splunk Fundamentals 1, during the lab exercise, it asks to do a search and says to notice the host=web_server and host=web_application results; however, there are no host=web_server results. So in the second search, when they say to put "port 22" in the search string, there are no results. I cannot finish the module. Any suggestions?
Hello, I built an app that routes data to specific sourcetypes using transforms and regex, while also trying to get the timestamping correct. Pretty basic setup:

props.conf

[ncipher]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
TRANSFORMS-sourcetype_routing = mySourcetype_ncipher_hardserver, mySourcetype_ncipher_hsglue

[ncipher:hardserver]
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = ]:\s
category = Custom
description = nCipher Timestamped Logs
disabled = false
pulldown_type = true

[ncipher:hsglue]
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
description = nCipher Bad timestamped logs get ingestion timestamp
disabled = false
pulldown_type = true

transforms.conf

[mySourcetype_ncipher_hardserver]
DEST_KEY = MetaData:Sourcetype
REGEX = \shardserver\[
FORMAT = sourcetype::ncipher:hardserver

[mySourcetype_ncipher_hsglue]
DEST_KEY = MetaData:Sourcetype
REGEX = \shsglue\:
FORMAT = sourcetype::ncipher:hsglue

Data sample:

Feb 24 02:07:36 nethsm hardserver[1516]: 2021-02-24 02:07:36: nFast server: Notice: CreateClient (v1) pid: 17267, process name: /opt/nfast/bin/nfcp
Feb 24 02:37:36 nethsm hardserver[1516]: 2021-02-24 02:37:36: nFast server: Notice: CreateClient (v1) pid: 18393, process name: /opt/nfast/bin/nfcp
Feb 24 02:38:03 nethsm hsglue: warrant DC11-1AB2-3456 loaded
Feb 24 02:39:30 nethsm hsglue: nohup: ignoring input
Feb 24 02:40:37 nethsm hardserver[1516]: 2021-02-24 02:40:37: nFast server: Notice: CreateClient (v1) pid: 18394, process name: /opt/nfast/bin/nfcp
Feb 24 02:41:30 nethsm hsglue: Started hardserver at pid 1516

What I'm trying to accomplish is to send all of the records with "hardserver" (which have well-formatted timestamps) to ncipher:hardserver, and the "hsglue" records to ncipher:hsglue with the CURRENT time as the timestamp. On test ingestion the records split into the correct sourcetypes; HOWEVER, timestamping didn't work for either, which is what I'm trying to solve. Any ideas what might be happening? Thanks for the thoughts!
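One possible explanation, a common gotcha rather than a verified diagnosis of this environment: index-time TRANSFORMS that rewrite the sourcetype run in the typing pipeline, after timestamp extraction has already happened in the merging pipeline against the incoming sourcetype ([ncipher]). That would mean the TIME_PREFIX/TIME_FORMAT on [ncipher:hardserver] and the DATETIME_CONFIG = CURRENT on [ncipher:hsglue] never take effect at index time. A sketch of moving the timestamp rules onto the incoming sourcetype instead:

```
# props.conf -- timestamp settings placed on the sourcetype the data
# arrives with, since sourcetype rewriting happens after timestamping
[ncipher]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
# hardserver events carry a second, well-formatted timestamp after "]: "
TIME_PREFIX = \]:\s
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
TRANSFORMS-sourcetype_routing = mySourcetype_ncipher_hardserver, mySourcetype_ncipher_hsglue
```

hsglue events lack the "]: " prefix, so Splunk would fall back to its default heuristics for them, which likely pick up the syslog header at the start of the line rather than the current time; if ingestion time is truly required for those, routing them through a separate input may be the cleaner path.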