All Topics

Good Day,

I am working with the following: through an indexer I have three fields of interest, Field A, Field B, and Field C. I also have a CSV file as a lookup file with the same fields, which I will call Field A1, Field B1, and Field C1. The CSV file also has two further fields of interest, which I will call Fields D1 and E1.

I have been trying to write a search string where, if Fields A, B, and C equal Fields A1, B1, and C1 in the CSV, Fields D1 and E1 are added to the event results. I tried the following:

(basic indexer search string to get Fields A, B and C) | lookup CSV FieldA1 FieldA2 FieldA3 | where FieldA and FieldB and FieldC = FieldA1 and FieldB1 and FieldC1 |

(I have tried append, output, etc., with either nothing coming up or errors.) I know I am missing something (and it is probably something simple). Any assistance would be appreciated.

Thanks, Ed
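A hedged sketch of how this is usually expressed: `lookup` does the field matching itself, so no `where` is needed afterwards. This assumes a lookup definition named `my_csv_lookup` has been created for the CSV; the field names are taken from the post.

```
(basic indexer search returning FieldA, FieldB, FieldC)
| lookup my_csv_lookup FieldA1 AS FieldA, FieldB1 AS FieldB, FieldC1 AS FieldC OUTPUT FieldD1, FieldE1
```

OUTPUT adds FieldD1 and FieldE1 only to events where all three lookup fields match; non-matching events simply get no value.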
Hi,

I am trying to use the transaction command where I need to get the data from 2 specific events with different sourcetypes. These 2 events are:

2020-07-09 12:50:09,918 id="id:1234" event=test_attempt app= connid=myapp status=inprogress responsetime=7 inmessagetype="Request"
2020-07-09 12:50:09,105 id:1234 INFO [org.test.validator.MediaValidator] in MediaValidator VERIFIED user=test@gmail.com found and match

The 1st event can occur multiple times in the same transaction, and that's why sometimes my transaction command is just grouping those 2 events and that's it. The common attribute in both of these events is id, on which I am using transaction as below, but it doesn't seem to be working. Sometimes I get only a single event, and sometimes the 1st event is grouped together. Can someone advise what could be done here to achieve this task? I can't use join because of the subsearch limitation, and I have to use transaction.

index=myindex (sourcetype="server_log" "[org.test.validator.MediaValidator] in MediaValidator") OR (source="/app/log/splunk-audit.log" event=test_attempt inmessagetype="Request" sourcetype=audit_log)
| rex field=_raw "id\=\"id\:(?<id>[^\"]+)"
| rex field=_raw "id\:(?<id>[^\s]+)"
| transaction id keepevicted=true

@gcusello @to4kawa
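If the end goal is one row per id combining fields from both events, a stats-based grouping (a common alternative when transaction misbehaves) might look like the sketch below; the field names are taken from the sample events, and stats avoids transaction's memory and eviction limits at the cost of losing the raw event text:

```
index=myindex (sourcetype="server_log" "[org.test.validator.MediaValidator] in MediaValidator") OR (source="/app/log/splunk-audit.log" sourcetype=audit_log event=test_attempt inmessagetype="Request")
| rex field=_raw "id\=\"id\:(?<id>[^\"]+)"
| rex field=_raw "id\:(?<id>[^\s]+)"
| stats earliest(_time) as start latest(_time) as end values(status) as status values(user) as user values(responsetime) as responsetime by id
```

If multiple test_attempt events per id must stay distinct, latest()/earliest() split per sourcetype (e.g. via eval flags before the stats) can pick out the specific pair.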
I am passing token "host_tok" from dashboardA to dashboardB. Now I am trying to set the query based on the token passed. The XML for Dashboard B is below:

================
<dashboard>
<label>Consul Level 2 Errors</label>
<description>Gives stats on ERRORS</description>
<row>
<panel>
<single>
<title>Number of Errors in last 5 minutes</title>
<search>
<init>
<condition match="$host_tok$==&quot;consul_client&quot;">
<set token="Panel1">host!=*consul* OR servername!=*consul* earliest=-5m sourcetype=consul_log index=hcm_consul "[ERROR]" NOT ("rpc error making call: rpc error making call: Permission denied" OR "rpc error making call: Permission denied" OR "Newer Consul version available") | eval SEARCH_CRITERIA=case(like(_raw, "%Push/Pull with%"), "Push/Pull Error", like(_raw, "%Failed fallback ping%"), "Failed fallback ping Error", like(_raw, "%connection reset by peer%"), "Connection reset by peer Error", like(_raw, "%keepalive timeout%"), "Keepalive Timeout Error", like(_raw, "%i/o timeout%"), "I/O Timeout Error", like(_raw, "%lead thread didn't get connection%"), "Lead thread didn't get connection Error", like(_raw, "%failed to get conn: EOF%"), "Failed to get conn: EOF Error", like(_raw, "%rpc error making call: EOF%"), "RPC error making call: EOF Error", like(_raw, "%Permission denied%"), "Permission denied Error", like(_raw, "%Timeout exceeded while awaiting headers%"), "Timeout exceeded while awaiting headers Error", true(), "Other Error")| stats count by SEARCH_CRITERIA</set>
</condition>
<condition match="$host_tok$==&quot;consul_server&quot;">
<set token="Panel1">host=*consul* OR servername=*consul* earliest=-5m sourcetype=consul_log index=hcm_consul "[ERROR]" NOT ("rpc error making call: rpc error making call: Permission denied" OR "rpc error making call: Permission denied" OR "Newer Consul version available") | eval SEARCH_CRITERIA=case(like(_raw, "%Push/Pull with%"), "Push/Pull Error", like(_raw, "%Failed fallback ping%"), "Failed fallback ping Error", like(_raw, 
"%connection reset by peer%"), "Connection reset by peer Error", like(_raw, "%keepalive timeout%"), "Keepalive Timeout Error", like(_raw, "%i/o timeout%"), "I/O Timeout Error", like(_raw, "%lead thread didn't get connection%"), "Lead thread didn't get connection Error", like(_raw, "%failed to get conn: EOF%"), "Failed to get conn: EOF Error", like(_raw, "%rpc error making call: EOF%"), "RPC error making call: EOF Error", like(_raw, "%Permission denied%"), "Permission denied Error", like(_raw, "%Timeout exceeded while awaiting headers%"), "Timeout exceeded while awaiting headers Error", true(), "Other Error")| stats count by SEARCH_CRITERIA</set>
</condition>
</init>
<query>$Panel1$</query>
<earliest>$earliest$</earliest>
<latest>$latest$</latest>
<refresh>1m</refresh>
<refreshType>delay</refreshType>
</search>
<option name="count">10</option>
<option name="drilldown">cell</option>
</single>
</panel>
</row>
<row>
<panel>
<chart>
<title>Errors Trendline for Nodes in last 60 minutes</title>
<search>
<query>$Panel2$</query>
<earliest>-60m@m</earliest>
<latest>now</latest>
<refresh>1m</refresh>
</search>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">45</option>
<option name="charting.chart">line</option>
</chart>
</panel>
</row>
</dashboard>
================

I am unable to get data; it keeps saying "Waiting for input". Can you suggest how I can achieve two different queries based on the token passed?
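As far as I know, Simple XML's <init> block is a child of <dashboard> or <form> (not of <search>), and it supports <set>, <eval>, and <unset> but not <condition>, which would explain why Panel1 is never set and the panel waits for input. A hedged sketch of an <eval>-based alternative (host_filter is a hypothetical token name; the long search strings from the post are abbreviated as ...):

```
<dashboard>
  <init>
    <eval token="host_filter">if("$host_tok$" == "consul_client", "host!=*consul* OR servername!=*consul*", "host=*consul* OR servername=*consul*")</eval>
  </init>
  ...
  <query>$host_filter$ earliest=-5m sourcetype=consul_log index=hcm_consul "[ERROR]" ...</query>
</dashboard>
```

Since only the leading host filter differs between the two searches, tokenizing just that fragment also avoids duplicating the long case() expression.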
Hi, is there a way to prevent users from sending reports to email addresses outside their organization? Thanks
I'm trying to create a dashboard that displays the user accounts that have received a password reset email.
hello,

We have some tests running as part of our CI/CD pipeline. These go through different stages like preparation, execution, report, cleanup, etc. I need to create a timeline for these stages per test execution, so that a user can check the timeline at any time and find out what stage the test is at. I have been able to chart the duration of the stages using:

search | rex field=source "job/test-stage-test/(?<build_number>.*)/console" | stats values(start_time) as startTime values(end_time) as endTime by stage, test, build_number | eval duration = (endTime-startTime)/1000 | chart avg(duration) by test, stage

So, with the data above, I have a start time and end time for all stages of every test, per build. How do I create a timeline for it, with time on the y-axis and stages stacked per test per build on the x-axis, where each test's bar starts at the test's start time and each stage's bar runs from that stage's start time to its end time? Something like:

1  ____test 1 : stage1____  _______t1 :stage 2____  __t1:stage 3___  _____t2:stage1___  ___t3:stage1____
2  ____test1:stage1___  _______t1:stage2_____
___________________________________________time______________________________________

Is it possible to do this kind of chart? I have tried multiple timechart options but none worked.
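Native Simple XML charts can't draw floating (start/end) bars, so this usually calls for a custom visualization such as the Timeline viz from Splunkbase, which expects one row per bar: _time as the start, a grouping field, and a duration. A sketch, assuming (as the /1000 in the post suggests) that start_time and end_time are epoch milliseconds; the exact duration units expected by the viz should be checked against its documentation:

```
search <your base search>
| rex field=source "job/test-stage-test/(?<build_number>.*)/console"
| stats values(start_time) as startTime values(end_time) as endTime by stage, test, build_number
| eval _time = startTime / 1000
| eval duration = endTime - startTime
| eval series = test . ":" . build_number . ":" . stage
| table _time series duration
```

Each test:build:stage combination then renders as one bar positioned at its start and sized by its duration.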
Hey Guys,

Using Strava for Splunk App v2.5.1, successfully installed and configured. I created a data input and Strava activities were being ingested into my index (a dedicated index called strava). All was well with the world. But then I was having some weird issues and decided to start from scratch. I deleted the TA and the index and started again with a cleanly restarted Splunk instance, free of any Strava for Splunk App config. This is what I did:

splunk stop
splunk clean eventdata -index strava
rm -rf /opt/splunk/etc/apps/TA-strava-for-splunk
rm -rf /opt/splunk/var/lib/splunk/strava.dat
rm -rf /opt/splunk/var/lib/splunk/modinputs/strava_api
rm -rf /opt/splunk/var/lib/splunk/strava
rm -rf /opt/splunk/etc/users/admin/TA-strava-for-splunk
rm -rf /opt/splunk/etc/users/splunkadm/TA-strava-for-splunk
( cd /opt/splunk/etc/apps; tar xvzf ~splunkadm/Downloads/strava-for-splunk_251.tgz )
splunk start

I reconfigured the Strava TA, including the data input, and the log shows a successful connection to Strava; it checks to see if there are any new activities (at this point I have nothing in the strava index) and then nothing. The log says "All done, looks like we've got all activities for XXX". Nothing appears in my clean, empty strava index. Using DEBUG mode I see it is asking Strava for anything new since the last activity (the epoch time of the last event). So, somewhere in my system it has cached/remembered what the last activity was, and now it won't ingest any activities. How do I reset that?
We have 6 indexers, each with 9 TB of storage. We also have ~100 indexes with different retention times, and we are indexing ~2 TB of data daily.

All our indexers have reached ~99% filesystem usage and we can't add more storage. I would like to set coldPath.maxDataSizeMB, since it looks like we have some issue with deleting the cold buckets. Does it make sense to set coldPath.maxDataSizeMB = 5242880 (5 TB)? How can I calculate the values? I have this tool https://splunk-sizing.appspot.com/ but I can't use it, since I have a different retention time for each index.
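One way to get concrete per-index numbers is to measure actual daily ingest from the license usage log, then estimate disk per indexer as daily volume times compression ratio times retention days, divided by the indexer count. This is a sketch; the ~50% on-disk compression ratio is an assumption worth checking with `| dbinspect`, and the 90 is a placeholder retention in days that would be replaced per index (e.g. via a retention lookup):

```
index=_internal source=*license_usage.log type=Usage earliest=-30d
| stats sum(b) as bytes by idx
| eval daily_GB = round(bytes / 1024 / 1024 / 1024 / 30, 2)
| eval disk_GB_per_indexer = round(daily_GB * 0.5 * 90 / 6, 2)
```

Summing disk_GB_per_indexer across indexes then shows whether the proposed coldPath.maxDataSizeMB values fit inside the 9 TB per indexer.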
Hello experts,

I am trying to create a custom macro that returns a result depending on the argument I pass to it, like this:

| makeresults | eval param=1 | eval result=case(param == 1, "one", param == 2, "two", param == 3, "three", true(), "Invalid number") | table result

The above query works well if I copy the whole query and paste it into the search bar. The macro version is:

| makeresults | eval param=$param$ | eval result=case(param == 1, "one", param == 2, "two", param == 3, "three", true(), "invalid input") | table result

But when I used it as a macro:

`getNumber(param=1)`

I got an error:

Error in 'makeresults' command: This command must be the first command of a search.

How can I solve this issue? Basically this macro will be used in another macro.
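One likely cause (hedged, since the full macro definition isn't shown): when a macro containing `| makeresults` is expanded anywhere other than at the very start of a pipeline, makeresults is no longer the first command and fails with exactly this error. Keeping only the eval inside the macro avoids the problem; a sketch of the macros.conf stanza (the name getNumber is from the post):

```
[getNumber(1)]
args = param
definition = eval result=case($param$ == 1, "one", $param$ == 2, "two", $param$ == 3, "three", true(), "invalid input")
```

Usage: | makeresults | `getNumber(1)` | table result. Because the macro now starts with eval, it can also be embedded in another macro or used mid-pipeline.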
Hi All,

We are trying to get the memory usage of the mvexpand command so that we can set max_mem_usage_mb in limits.conf correctly. Based on the explanation in https://docs.splunk.com/Documentation/Splunk/8.0.4/Admin/Limitsconf, mvexpand should truncate its result when the memory limit is hit. Using the following search, I am able to get the overall search memory usage:

`dmc_set_index_introspection` host=SH1-stack360test01 sourcetype=splunk_resource_usage data.search_props.sid::* data.search_props.mode!=RT
| `dmc_rename_introspection_fields`
| stats max(elapsed) as runtime max(mem_used) as mem_used earliest(_time) as _time by sid, label, provenance, type, mode, app, role, search_head, user
| eval mem_used = round(mem_used, 2)
| sort 20 - mem_used
| fields - day, hour, minute, second
| eval _time = strftime(_time,"%+")
| rename sid as SID, label as "Search Name", provenance AS Provenance, type as Type, mode as Mode, app as App, role as Role, search_head as "Search Head", user as User, mem_used as "Memory Usage (MB)", _time as Started, runtime as Runtime
| fieldformat Runtime = `dmc_convert_runtime(Runtime)`

From our observation, mvexpand does not truncate the result when it is over the limit (search memory usage hit 520 MB). The limit is set as follows:

[default]
max_mem_usage_mb = 512

[mvexpand]
max_mem_usage_mb = 512

Is there a way for us to get the memory usage of the mvexpand command itself in a scheduled search? Thank you
Hello experts,

I am looking for a solution to respond to an incident using an alert in Splunk. Instead of sending the alert by email, I want to call specific APIs when the alert is triggered: creating a ticket in Jira, sending a message via a third-party service, restarting a server when it runs out of memory, and so on. How can we implement this?
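Splunk's built-in webhook alert action covers the simple "call an API on trigger" case; a sketch of the relevant savedsearches.conf settings (the search, schedule, and URL are placeholders, and the usual alert trigger settings are omitted for brevity):

```
[Server OOM alert]
search = index=os sourcetype=vmstat memUsedPct > 95
cron_schedule = */5 * * * *
action.webhook = 1
action.webhook.param.url = https://automation.example.com/hooks/restart-server
```

For richer integrations (Jira tickets, chat messages), the usual options are a vendor add-on from Splunkbase or a custom alert action script.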
HI Splunkers,

I am looking for some help on loops in Splunk. I have a lookup file like below:

from,to
parent,child1
parent,child2
parent,xyz
child1,son1
child1,daughter1
child2,son2
child2,daughter2
xyz,kid1
son1,kid1
son2,kid2

I want to know all the nodes that have some kind of direct relation (like a network path) to kid1. The output should be kid1, son1, child1, parent, xyz:

kid1 --> son1
son1 --> child1
child1 --> parent
kid1 --> xyz
xyz --> parent

I have the below query, which gives me the required output, but as you can see I ran the lookup multiple times to get the path, and I don't want to do that. I want a loop that runs until we reach "parent" for each node. I know SPL is not a programming language, but I am curious to know if this is possible. Please help.

| makeresults
| eval find="kid1"
| lookup network.csv to AS find output from
| mvexpand from
| lookup network.csv to AS from output from AS from1
| mvexpand from1
| lookup network.csv to AS from1 output from AS from2
| mvexpand from2
| eval newField=find."-".from."-".from1
| makemv delim="-" newField
| table newField
| mvexpand newField
| dedup newField
| stats values(newField) As output
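SPL has no general loop or recursion construct, so the realistic options are a fixed maximum depth (as in the query above) or a custom search command. One way to at least remove the repetition is to wrap a single hop in a macro and chain it once per level (hypothetical macros.conf stanza; network.csv is from the post):

```
[hop(2)]
args = in, out
definition = lookup network.csv to AS $in$ OUTPUTNEW from AS $out$ | mvexpand $out$
```

Usage: | makeresults | eval n0="kid1" | `hop(n0,n1)` | `hop(n1,n2)` | `hop(n2,n3)`, chained up to the deepest expected path. A truly unbounded traversal would need a custom (e.g. Python) search command.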
I have installed the Splunk universal forwarder on EC2 instances, which forward data to a Splunk heavy forwarder. In order to route the data to on-premise Splunk, I need to configure the heavy forwarder. I have already configured the heavy forwarder to route data to Splunk Cloud. How do I configure it to also forward the data to on-premise?
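Cloning to two destinations is normally done with multiple tcpout groups in outputs.conf on the heavy forwarder; a sketch (hostnames, ports, and group names are placeholders; the Splunk Cloud group typically already exists via the Splunk Cloud forwarder credentials app, so only the on-premise group needs adding):

```
[tcpout]
defaultGroup = splunkcloud, onprem_indexers

[tcpout:onprem_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
```

With both groups listed in defaultGroup, every event is sent to both destinations; selective routing per sourcetype would instead use props.conf/transforms.conf with _TCP_ROUTING.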
Hi @gcusello ,

We have installed Splunk DB Connect v1 and its Java bridge is running. Can you please guide us on what to do next? We are looking to monitor an Oracle database.

Regards, Rahul
Greetings,

I'm setting up an alert and I noticed that for each alert, only one of each trigger action type is allowed. For example, you can select one email action and log one event. But once I choose a certain type, I can't choose it again. What if I want the trigger to log n events? Is this allowed somewhere? Thanks in advance
Dear Splunk team,

We are having trouble sending the results of a scheduled search by email. Our scheduled search result has a field that indicates who should get the result. For example:

field_A  code  email
a        1     a@gmail.com
b        2     b@gmail.com
c        1     a@gmail.com
d        2     b@gmail.com

Currently, the saved search sends one email per record (4 emails in total to 2 recipients). Expected (2 emails):

Mail 1 sends this content to a@gmail.com in one CSV file:

field_A  code  email
a        1     a@gmail.com
c        1     a@gmail.com

Mail 2 sends this content to b@gmail.com in one CSV file:

field_A  code  email
b        2     b@gmail.com
d        2     b@gmail.com
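One pattern for per-recipient emails (a sketch; `map` re-runs the base search once per recipient, so it is best suited to a short recipient list) is to get the distinct addresses first and then let map filter and send for each one. The base search is a placeholder:

```
| stats count by email
| map maxsearches=10 search="search <your base search> | where email=\"$email$\" | fields field_A code email | sendemail to=\"$email$\" sendcsv=true subject=\"Scheduled report\""
```

The sendresults app on Splunkbase was built for exactly this per-recipient use case and may be a cleaner option.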
We are connecting to our Solace queue using the Splunk JMS_TA add-on and consuming data from the queue, but we frequently get a "too many connections" exception. Please help with this.

Error details:

ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/jms_ta/bin/jms.py" Caused by: ((Client name: XXXXX.com/22109/#037805d3 Local addr: XXXXX Remote addr: XXXX.XXX.com:55555) - ) com.solacesystems.jcsmp.JCSMPErrorResponseException: 503: Max Client Queue and Topic Endpoint Flow Exceeded
0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/jms_ta/bin/jms.py" Stanza jms://queue/sys:Q_XX_XX_XX_ERROR : Error connecting : javax.jms.JMSException: Error creating consumer - internal error (503: Max Client Queue and Topic Endpoint Flow Exceeded

@baboonbones
Hi all,

When I was trying to set up a new connection in DB Connect 3.2, I got this error message:

java.sql.SQLException: Error while saving schema to file: java.io.FileNotFoundException

Does anyone know how to fix it? Thanks
Beginner here. I'm trying to run a search on unique logins for a web-based application. The current logs, however, do not contain the information I need to count which app the user logged into. It may be easier to illustrate the search: what I am trying to achieve is that, for a given _time value, all those events (hidden) are triggered at the exact same time, and I want to use that value as a unique ID to evaluate all the events that happened at that time as a group. The information I require is in a_app. Could someone explain a way to achieve this? In summary, if the UserAuthicationQuery had an actual log that identified what the user was logging into, this would work, but the a_app for this process is recorded in a central location and not associated with the actual app the user is entering.
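A sketch of the grouping idea, treating the shared timestamp as the correlation key (the field names user and a_app are from the post; the base search is a placeholder):

```
index=<your index> sourcetype=<your sourcetype>
| bin _time span=1s
| stats values(user) as user values(a_app) as a_app count by _time
```

Every event sharing a _time value lands in one row, so the a_app recorded by the central authentication event ends up on the same row as the rest of the group. The bin is only needed if the timestamps match to the second rather than exactly.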
Hi Splunk Community,

We have created a few whitelists in our inputs.conf file. It was all fine until I tried to enter the following:

whitelist10=Type="Information" SourceName="Customer.Service" Message="*Request Info:ContactCustomer - CreateNewContact*"

After restarting my Splunk, I get the following:

Invalid key in stanza [WinEventLog://Application] in C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf, line 38: whitelist12 (value: Type="Information" SourceName="Customer.Service" Message="*Request Info:ContactCustomer - CreateNewContact*").

and it gives me the following:

Did you mean 'whitelist'?
Did you mean 'whitelist1'?
Did you mean 'whitelist2'?
Did you mean 'whitelist3'?
Did you mean 'whitelist4'?
Did you mean 'whitelist5'?
Did you mean 'whitelist6'?
Did you mean 'whitelist7'?
Did you mean 'whitelist8'?
Did you mean 'whitelist9'?

Is there a limit to the number of whitelists we can create? Or what is the next valid key to use after whitelist9?

Thanks!! Azrad
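For the Windows event log input, the documented keys are whitelist plus whitelist1 through whitelist9, which matches the suggestions in the error, so ten whitelist entries appears to be the ceiling (worth confirming against the inputs.conf spec for your forwarder version). One workaround is to merge several filters into one key with regex alternation; a sketch, where the second SourceName is a hypothetical example:

```
[WinEventLog://Application]
whitelist9 = Type="Information" SourceName="(Customer\.Service|Other\.Service)" Message="*Request Info:ContactCustomer - CreateNewContact*"
```

Since each key=regex pair is matched as a regular expression, alternation inside one SourceName pattern behaves like two separate whitelist entries.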