All Topics

Hi All, before posting here I tried everything under https://community.splunk.com/t5/Splunk-Search/How-to-join-2-indexes/m-p/560334 but couldn't figure out my search.

Index01 contains these fields of interest: host, hostname, agent_version, agent_date. The difference between the host and hostname fields is that host contains the name of the HF server (which I don't want to correlate), while hostname contains the list of device names (which I want to correlate with Index02). In Index02, the fields of interest are host (default field) and _time (default field).

To summarize, the field hostname from Index01 matches the values of the field host from Index02, so this is the common denominator. The requirement: for every device in Index01, find the latest timestamp (i.e., when the device last logged) from Index02.

Below is what I need to achieve:

hostname (Index01) | agent_date (Index01) | agent_version (Index01) | LastSeen (Index02)
xxx | xxx | xxx | xxxx

I have tried the two queries below, but no luck. They show 0 results found, yet if I run the searches individually they return data.

index=index01
| rex field=dns "(?P<hostname>[a-zA-Z0-9-]+)."
| dedup hostname
    [ search index=Index02
      | stats latest(_time) as lastSeen_epoch BY host
      | eval LastSeen=strftime(lastSeen_epoch,"%m/%d/%y %H:%M:%S")
      | fields host LastSeen ]
| table hostname agent_date agent_version LastSeen

OR

index=index01
    [ search index=Index02
      | stats latest(_time) as lastSeen_epoch BY host
      | eval LastSeen=strftime(lastSeen_epoch,"%m/%d/%y %H:%M:%S")
      | fields host LastSeen ]
| rex field=dns "(?P<hostname>[a-zA-Z0-9-]+)."
| dedup hostname
| table hostname agent_date agent_version LastSeen
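One pattern that often works better than a subsearch here is to search both indexes at once and merge on the common value with stats. The following is a minimal sketch, assuming the field names from the post (hostname extracted from the dns field in Index01, host in Index02) and that the values match exactly after lowercasing:

index=index01 OR index=index02
| rex field=dns "(?P<hostname>[a-zA-Z0-9-]+)\."
| eval device=lower(coalesce(hostname, host))
| stats latest(agent_date) as agent_date
        latest(agent_version) as agent_version
        max(eval(if(index=="index02", _time, null()))) as lastSeen_epoch
        by device
| eval LastSeen=strftime(lastSeen_epoch, "%m/%d/%y %H:%M:%S")
| table device agent_date agent_version LastSeen

The subsearch versions return 0 results because the subsearch output (host plus LastSeen values) is applied as extra filter terms that the Index01 events can never satisfy; the stats approach sidesteps that entirely.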
Hi, I have an inputlookup with wSender, wSubject and wRecipient. I want to whitelist some of the emails sent by a user to a specific recipient that have a specific subject. How can I whitelist based on these 3 conditions together (Sender=X, Subject=Y, Recipient=Z)?

I've tried:

where Sender!=wSender AND Subject!=wSubject AND Recipient!=wRecipient

but in this case all the emails sent by wSender are whitelisted. I also tried:

index=xxx AND NOT | inputlookup whitelist.csv fields wSender, wSubject, wRecipient

but with the same result: the user from wSender is getting whitelisted for all the emails he sent, not just the ones with wSubject.
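One approach that keeps the three conditions tied together per whitelist row is to feed the lookup into the base search as a NOT subsearch; each lookup row becomes an ANDed group of Sender/Subject/Recipient, and the rows are ORed. A minimal sketch, assuming the event fields are named Sender, Subject and Recipient:

index=xxx NOT
    [ | inputlookup whitelist.csv
      | rename wSender as Sender, wSubject as Subject, wRecipient as Recipient
      | fields Sender Subject Recipient ]

The rename maps each whitelist column onto the corresponding event field, so a sender is only excluded when the subject and recipient of that same row also match.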
I've done a simple search like this:

index=fw_cisco | stats dc(dest_ip) as NrDestIp by src_ip

I have defined a lookup file (ip_lookup) which has two columns: IPHost and DNShost. How do I replace the values of src_ip with the corresponding values from the lookup table? I tried this:

index=fw_cisco
| lookup ip_lookup IPHost as src_ip OUTPUT DNSHost as resolved_src
| stats dc(dest_ip) as NrDestIp by src_ip, resolved_src

But it creates two columns, and it also misses the values of src_ip that don't have a matching IPHost in the lookup table.
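A common way to handle both issues is to fall back to the raw IP when the lookup has no match, then group by that single field. A sketch, assuming the lookup column is spelled DNShost (the post uses both DNShost and DNSHost, and lookup field names are case-sensitive):

index=fw_cisco
| lookup ip_lookup IPHost as src_ip OUTPUT DNShost as resolved_src
| eval src_display=coalesce(resolved_src, src_ip)
| stats dc(dest_ip) as NrDestIp by src_display

coalesce keeps the original src_ip for addresses missing from ip_lookup, so no sources drop out of the result.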
Let me be more clear: I have defined a lookup file (ip_lookup) which has two columns: IPHost and DNShost. Now I have a search with two fields, src_ip and dest_ip. I successfully created a new field by using lookup like this:

index=fw_cisco | lookup ip_lookup IPHost as src_ip OUTPUT DNShost as resolved_source_ip

But I want to do the same for the field dest_ip too. Doing the lookup like this:

| lookup ip_lookup IPHost as src_ip, dest_ip ...

throws an error. How do I create two new fields that match the src_ip and dest_ip of my events from the same lookup?
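A single lookup invocation matches one input field, so the usual pattern is simply to call lookup twice against the same table, once per field. A sketch, reusing the names from the post:

index=fw_cisco
| lookup ip_lookup IPHost as src_ip OUTPUT DNShost as resolved_source_ip
| lookup ip_lookup IPHost as dest_ip OUTPUT DNShost as resolved_dest_ip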
I am trying to figure out a way to calculate a field in a set of data. My search returns events from a long list of computers. For lack of a better explanation, each computer throws one of these events once a day, at the same time every day. The logs have the fields ComputerName and ComputerValue, and every day ComputerValue is a different numeric value.

I need to create a new field in each log that is the difference between the current and previous ComputerValue. So if on day 1 Computer1 gives ComputerValue 10, and on day 2 Computer1 gives ComputerValue 12, I need to add a field at search time to Computer1's day 2 event that is the day 2 value minus the day 1 value, positive or negative; day 2 would have a ComputerDifference of 2. And if the day 3 ComputerValue is 8, ComputerDifference would be the day 3 value minus the day 2 value, i.e. -4. It's something I could easily do in Excel, but I can't figure out a way to do it here. Any suggestions?
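streamstats is built for this kind of previous-event comparison, since it can carry the prior value forward per computer. A minimal sketch, where index=your_index is a placeholder and ComputerName/ComputerValue are assumed to be extracted already:

index=your_index ComputerName=* ComputerValue=*
| sort 0 _time
| streamstats current=f window=1 last(ComputerValue) as PrevValue by ComputerName
| eval ComputerDifference = ComputerValue - PrevValue

current=f window=1 makes PrevValue the previous event's ComputerValue for that computer, so the first event per computer has no PrevValue and ComputerDifference stays empty for it.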
Hello, from the GUI (DB Input) it seems that Splunk DB Connect is unable to detect any rising column because of our subquery:

SELECT event_time
FROM sys.fn_get_audit_file (
    (SELECT TOP(1) e.audit_file_path
     FROM [sys.dm_server_audit_status] e
     WHERE e.name = 'Audit-select-statement'),
    default, default)
WHERE event_time > ?
ORDER BY event_time ASC

If I remove the SELECT TOP(1), the rising column appears again. The goal is to query the audit table with the current filename. I saw another discussion (https://community.splunk.com/t5/Splunk-Search/DB-Connect-rising-column-combination-of-two-columns/m-p/121434) but it seems the enhancement request (DBX-564) is still not ready. Has anyone run into the same issue?

Kind Regards,
Hi all, I am using the Screenshot Machine API to retrieve an image, but the image is not displayed properly. We have confirmed that the API test is successful, and the execution history (action details) shows the file size and the path to the vault file, so we believe the screenshot was taken successfully. Does anyone know the cause?
Hello all, I have been using Splunk's classic dashboards for a while and recently switched to Dashboard Studio. One thing I can't figure out is how to make the label shown to the user different from the value actually used by the token. Say the data source of a dropdown is an SQL query with 2 columns: I want to use one as the label (i.e., the value displayed to the user) and the other as the token's actual value. Is that possible? If it helps, in a classic dashboard it is done via:

<fieldForLabel>name</fieldForLabel>
<fieldForValue>age</fieldForValue>

Any ideas? Thank you very much!
Hi, apologies if the subject is a bit vague, but is there a way to check the overall events-per-second (EPS) ingestion rate? Is it done through the Monitoring Console? Thank you in advance. Mikhael
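Besides the Monitoring Console's indexing performance views, a rough EPS figure can be pulled from the indexers' own metrics. A sketch, assuming the per_index_thruput group in metrics.log carries the ev (event count) field on your version:

index=_internal source=*metrics.log* group=per_index_thruput
| timechart span=1m sum(ev) as events
| eval eps=round(events/60, 2)

Summing ev per minute and dividing by 60 gives an average EPS across all indexes; filtering on a specific series value narrows it to one index.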
Hi all, I would like to ask this: say I assigned app1 and app2 to a server class. How can I find out which index app1 and app2 use? I know the alternative is to look through the file system (etc > deployment-apps > app > local, where index=test is set), but is there a way to find the index without going into the machine's file system or the CLI? Thank you for any help provided.
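One indirect way to check, without touching the deployment server's file system, is to look at where data from the forwarders in that server class is actually landing. A sketch, where your_forwarder_host is a placeholder for a host that has app1/app2 deployed:

| tstats count where index=* host=your_forwarder_host by index, sourcetype
| sort - count

If the apps define distinct sourcetypes, the sourcetype column makes it easy to see which index each input writes to.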
Hi, I'm trying to convert my classic dashboard to Studio, and one of my bar charts uses a trellis layout. For context, this is my search:

index | search tenant = "*"
| dedup id, tenant
| stats count as total_devices by tenant
| join type=left
    [ index | dedup id, tenant
      | where time > (now()-86400)
      | stats count as available_devices by tenant ]
| eval available_devices = if(isnull(available_devices) OR available_devices="", "0", available_devices)
| eval unavailable_devices = total_devices - available_devices

Ideally, the trellis layout should separate the results by tenant. However, when I use the splitSeries option, it splits into total_devices, available_devices, and unavailable_devices instead. Any help would be appreciated, thanks!
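Trellis splits on the chart's series, and in this search the series are the three count columns. One way to make tenant the split field is to reshape the final table so each row is a (tenant, metric, count) triple; a sketch appended to the existing search, assuming the three columns are named as above:

| fields tenant total_devices available_devices unavailable_devices
| untable tenant metric count

With the data in this shape, the Dashboard Studio trellis can typically split on tenant while the bar chart plots count by metric within each panel.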
I have two events that I need to subtract timestamps from in order to get a response time. However, this needs to be grouped by "a_session_id" / "transaction_id". The two events I need are circled in red in the attached screenshot; I need those two events out of the three, and every "a_session_id" has these three logs. source="/apps/logs/event-aggregator/gateway_aggregator_events.log" always comes after source="/logs/apigee/edge-message-processor/messagelogging/gateway-prod/production/Common-Log-V1/14/log_message/gateway.json". Please let me know if you need more information, such as snippets of the SPL. Any assistance is much appreciated!
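Since the two sources bracket the transaction, a stats over both sources per session id is usually enough. A sketch, assuming a_session_id is extracted in both sourcetypes and that restricting the search to these two sources leaves exactly the two events of interest per session:

(source="/apps/logs/event-aggregator/gateway_aggregator_events.log" OR source="/logs/apigee/edge-message-processor/messagelogging/gateway-prod/production/Common-Log-V1/14/log_message/gateway.json")
| stats min(_time) as start_time, max(_time) as end_time by a_session_id
| eval response_time = end_time - start_time
| table a_session_id response_time

If the third event also comes from one of these sources, add a term to the base search that excludes it before the stats.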
Hello, I'll try to summarize the different steps to automatically onboard a CSV file into Splunk.

1) On the forwarder:
- I need an inputs.conf to tell the forwarder what data to send (and possibly a props.conf): http://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf
- I also need an outputs.conf to tell the forwarder where to send the data: http://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf

inputs.conf
[monitor://C:\Program Files\SysCheck\Logs\*.txt]

outputs.conf
[tcpout:anyName]
server=indexer.myco.com:9997

2) On the indexer, I need to configure the receiving port: http://docs.splunk.com/Documentation/Splunk/6.2.1/Forwarding/Enableareceiver

Is this correct? Thanks
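Those steps cover the standard monitor-and-forward setup. Once the receiving port is enabled, a quick sanity check from the search head is to confirm the files are arriving; a minimal sketch (searching all indexes, since the monitor stanza above does not set one):

index=* source="C:\\Program Files\\SysCheck\\Logs\\*.txt"
| head 10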
Hi, wondering if someone can assist. I want to implement and test DHCP spoofing and ARP poisoning detection/alerting using Splunk Enterprise as a SIEM. Thank you.
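Detections like this depend entirely on which network logs are onboarded, but as an illustration of the general shape, here is a hedged sketch of an ARP-poisoning style indicator: flag IP addresses claimed by more than one MAC address. The index, sourcetypes and field names (src_ip, src_mac) are assumptions and would need to match your actual data:

index=network sourcetype=arp OR sourcetype=dhcp
| stats dc(src_mac) as mac_count, values(src_mac) as macs by src_ip
| where mac_count > 1

A similar dc()-based check against DHCP server addresses per subnet is one way to surface rogue DHCP servers.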
Hello, how do we remove and clean a corrupt peer (indexer) from a cluster? Should we stop it, then after the hardware maintenance delete all the index data directories, and then start it again to resync all the data from the other peers? Thanks!
I am trying to UPGRADE using Ansible; I kick off the playbook via the bastion host. Here are the tasks:

1. copy the install file to the remote host
2. stop the Splunk service
3. install Splunk Forwarder 9.1
4. reboot
5. start the Splunk service

All are fine until step 4. I ssh'd to the specific host and checked the status; Splunk was not running. I scratched my head and tried something like the below:

- sudo to root
- /opt/splunkforwarder/bin/splunk version
- it prompted for the license and the perform-upgrade message; I typed Y for both
- after a few minutes (it showed a message to disable boot start), it returned to the prompt
- disabled boot start
- reboot
- sudo systemctl start splunk

Finally, it's up and running. How do I fix step 5? I have 100s of EC2 instances to upgrade.
I am using a normal pie chart to visualise data on a Splunk dashboard. I would really love to add a donut pie chart instead of a normal pie chart. Is it possible to do this in Splunk?
I have a log which looks like the following:

Request received :: Id assigned. --- Id=1, BODY={"userIds":["11"],"email":"test@test.com,"Client":"Test"}

The userIds list will always contain one element surrounded by square brackets, so from the request above I want to get 11. I am using rex to extract the user ID, but it doesn't seem to be working:

index=prod-* sourcetype="kube:service" "Request received "
| rex field=_raw "userIds\":\[\"(?<user_id>\d+)\""
| table user_id

But the table comes back empty.
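One frequent reason a pattern like this misses is that the JSON body in the raw event is logged with escaped quotes (\"userIds\" instead of "userIds"), so the literal quote characters in the rex never line up. A sketch of a pattern that avoids relying on the quotes and only anchors on userIds and the opening bracket:

index=prod-* sourcetype="kube:service" "Request received"
| rex field=_raw "userIds[^\[]*\[[^0-9]*(?<user_id>\d+)"
| table user_id

If the events really do contain the plain JSON shown above, this pattern still matches, so it is a safe superset of the original.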
Hi Team, my data across multiple indices looks like this, from oldest to latest (each index holds one workweek, with fields par, lkg, target, workweek):

Index 1 (oldest):
par  lkg  target  workweek
a    1    8       ww1
b    2    9       ww1
c    3    7       ww1
d    4    6       ww1

Index 2:
par  lkg  target  workweek
a    5    8       ww2
b    6    9       ww2
c    7    7       ww2
d    8    6       ww2

Index 3 (latest):
par  lkg  target  workweek
a    4    8       ww3
b    5    9       ww3
c    8    7       ww3
d    2    6       ww3

I want to recreate the data like this (plus a line chart per row):

par  Target  ww1  ww2  ww3
a    8       1    5    4
b    9       2    6    5
c    7       3    7    8
d    6       4    8    2

The major catch is that we do not know how many indices there are, and we do not know how many par values are in any index. How can we automate Splunk to create a line chart for each of these par values showing the lkg trend across the workweeks?

@Richfez @richgalloway @ITWhisperer @aljohnson_splun @PickleRick
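Because stats and chart build the workweek columns dynamically, the number of indices and par values does not need to be known in advance. A sketch, assuming the indices can all be matched with one wildcard (e.g. index=index*) and that par, lkg, target and workweek are extracted fields:

index=index* par=* workweek=*
| stats latest(lkg) as lkg, latest(target) as Target by par, workweek
| eval {workweek}=lkg
| stats latest(Target) as Target, values(ww*) as ww* by par

For the per-par line charts, a companion search such as index=index* | chart latest(lkg) over workweek by par feeds a line chart, and a trellis split on par gives one panel per par value.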
I have a field with data like this:

loggingObject.methodName="WXYX.MNOController.myMethodName"
loggingObject.methodName="DEF.GHI.TUVController.myMethodName2"

I want to extract just the myMethodName part. If the dot before it comes along, that is fine. I tried using the regex field extractor, and this is what it came up with:

^(?:[^\.\n]*\.){9}(?P<methodName>\w+)

It seems to be creating a name for the extracted field, "methodName". I then tried to use it in my query like this:

| regex methodName="^(?:[^\.\n]*\.){9}(?P<methodName>\w+)"

But it doesn't work. There also isn't anything in that line that tells it to extract from the loggingObject.methodName field specifically. How can I extract what I'm trying to extract?
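Part of the confusion is that regex only filters events, while rex is the command that extracts new fields. A sketch that captures whatever follows the last dot, assuming the value is already available in the extracted field loggingObject.methodName:

| rex field=loggingObject.methodName "\.(?<methodName>[^\.]+)$"
| table loggingObject.methodName methodName

Anchoring on the final dot with $ avoids having to count the number of dotted segments, so it works for both example values.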