All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, I would like to extract more lines after searching for a particular string. E.g., I want to search for the string "Myname" and see 3 lines along with the search string in the output. Note: the 3 lines are not constant and keep changing.

abcdefghijklmnop
1234567890
Myname
dsadasdasd
1231232131231
asdasdasdas

Expected result:

Myname
dsadasdasd
1231232131231
asdasdasdas
dsasaasdsa
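A minimal SPL sketch of one way to do this, assuming the lines all live inside a single multiline event (the index and field names are illustrative, and the {1,3} range would need adjusting since the line count varies):

index=your_index "Myname"
| rex "Myname(?<lines_after>(?:\n[^\n]+){1,3})"
| table lines_after

If each line is instead a separate event, this approach won't apply and something event-correlating (e.g., transaction) would be needed instead.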
Hi, I am encountering an issue with one particular index: I am unable to use index!= to exclude results from it. For example, I have 3 indexes - endpoint, server, mobile. I run index=* index!=server index!=mobile [search parameters]. However, when the results come back, they show 2 indexes - endpoint and server. That means index!=mobile works, but not index!=server. I did verify that without the index!= filters I see all 3 indexes. Of course this is a very simplified example with only 3 indexes, but I am wondering what could cause index!=server not to work. In my current setup, all other indexes (I tested 10) work with the index!= filter, but not that particular one. Thanks.
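As a sanity check on the syntax side, the same exclusion can be sketched with NOT, which in some cases behaves differently from != (index names are taken from the example above):

index=* NOT index=server NOT index=mobile [search parameters]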
I have multiple UFs (Universal Forwarders) in my environment, all of them sending logs to one IF (Intermediate Forwarder). One UF is installed on a syslog server and has suddenly stopped sending logs to Splunk. How can I find out when that UF last sent logs to Splunk? And if I search for that UF's name as host, will I get results, or will I only ever see the IF names as host? [Note: please attach the relevant Splunk doc link if you know it]
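A sketch of one way to see when a host was last heard from, assuming the UF's hostname really does appear as host on its events (my_uf_host is a placeholder):

| metadata type=hosts index=*
| search host=my_uf_host
| eval lastSeen=strftime(lastTime, "%Y-%m-%d %H:%M:%S")

Searching index=_internal host=my_uf_host can also show when the forwarder itself last phoned home, since forwarders send their own internal logs.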
Hi all, how can I send the same data from one universal forwarder to multiple universal forwarders? Is there a way to configure this? If yes, please tell me the process.
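A minimal outputs.conf sketch on the sending UF, assuming the goal is to clone the same data to two receivers (the group names and host:port values are placeholders):

[tcpout]
defaultGroup = group_a, group_b

[tcpout:group_a]
server = receiver-a.example.com:9997

[tcpout:group_b]
server = receiver-b.example.com:9997

Listing multiple groups in defaultGroup sends a copy of the data to each group; the receiving forwarders would each need a splunktcp input listening on the chosen port.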
I have a CSV file that I upload through Lookup Editor, which has a Time column in this format: 15/06/2021 14:35:00. I want to convert it to Splunk-readable time or a Unix time format so I can filter the rows between two dates (between 14/06/2021 and 07/07/2021). I have tried

|inputlookup sample.csv |eval time = strptime(Time,"%m/%d/%Y %I:%M:%S %p") |table time

but it returns "No results found". How do I go about this? Or does my strptime have an error in the formatting?
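The format string above doesn't match the sample value 15/06/2021 14:35:00, which is day-first with a 24-hour clock and no AM/PM marker. A sketch with a matching format, assuming every row uses that layout:

|inputlookup sample.csv
| eval time = strptime(Time, "%d/%m/%Y %H:%M:%S")
| where time >= strptime("14/06/2021", "%d/%m/%Y") AND time <= strptime("07/07/2021", "%d/%m/%Y")
| table Time time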
Need to extract fields from the raw data below; currently no fields are automatically extracted.

Raw Event: Server: autoparts01, Userid: monika, Alias: autoparts01monika, Return Code: 400, Password Len: 32, Host: ELKSPL3212, Execution ID: autodr1, Directory: C:\windows\system32, Program: C:\windows\Sys64\dllhost.exe, Elapsed Time: 0, Bypass Cache: false, Type: Windows dll - 0, Version: 3.6

Output sample: I need a regex; the fields are separated by commas (,):
Server: autoparts01 to Server=autoparts01
Userid: monika to Userid=monika
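Two hedged sketches: an explicit rex for the first few fields (extend the same pattern for the rest), and the extract command, which tries to pull all the pairs at once given the delimiters (how it normalizes key names containing spaces is worth verifying):

| rex "Server:\s(?<Server>[^,]+),\sUserid:\s(?<Userid>[^,]+),\sAlias:\s(?<Alias>[^,]+)"

| extract pairdelim="," kvdelim=":"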
I have installed, correctly configured, and repeatedly checked the settings for two apps to get data into Splunk, but the data is not appearing in searches. I have read every document I could find and tried everything possible, with no success so far. Other Splunk apps installed are working as designed.

1) Splunk Add-on for IIS
- The add-on is correctly configured according to the documentation, and there is no communication issue between the server and Splunk Enterprise, as other data from that server does appear in searches.
- .conf files and everything else are pointed at the correct location.
- I have tried indexing it to the default index as well as the IIS index.

2) Cisco Add-On for Splunk Enterprise (TA-cisco_ios)
- The app is configured according to the documentation.
- The Cisco switch model is supported by the app.
- UDP port 514 is configured on Splunk.
- No network security devices are causing communication issues between the switch and Splunk.

Of note: the server's CPU is constantly at or near 100% (Splunk is optimized and configured to reduce usage; the server is multi-use).
*Upgrading the server is not an option at this time.
*Moving to cloud is not an option.
Any thoughts/suggestions/fixes would be greatly valued.
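Two sketch searches that can help confirm whether the data is reaching the indexing pipeline at all (the series filter is a guess at the sourcetype name):

index=_internal source=*metrics.log* group=per_sourcetype_thruput series=*iis*

| tstats count where index=* by index, sourcetype

The first shows indexing throughput per sourcetype; the second shows whether events might be landing under an unexpected index or sourcetype.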
We have the following -

<input type="dropdown" token="Status" searchWhenChanged="false">
<label>Job Status</label>
<choice value="*">ALL</choice>
<choice value="SUCCESS">SUCCESS</choice>
<choice value="FAILURE">FAILURE</choice>
<choice value="RUNNING">RUNNING</choice>
<default>*</default>
<initialValue>*</initialValue>
</input>

In the code the following works just fine -

| where STATUS = "$Status$"

except for the ALL token, because the code becomes

| where STATUS = "*"

instead of

| where STATUS = *

What can be done?
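One common workaround is to filter with search instead of where, since search treats * in a quoted value as a wildcard while where treats it as a literal string:

| search STATUS="$Status$"

With the dropdown above unchanged, the ALL choice then expands to | search STATUS="*", which matches everything.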
Context: New Search View. I am not referring to Dashboards (which have many auto-run posts). I often develop searches in Verbose mode over a very small timespan, then move to Fast mode over a large timespan (Smart mode doesn't work well for me). What drives me crazy is that when I either change the mode or select a time preset, the Splunk UI automatically begins a search. I want to change both the mode and the timespan before starting a search. Is there a way to disable the automatic search run -- so that a search only runs after an explicit "enter" / search button click?
Hello Splunkers, I have the query below, where I am trying to get a count by field values. I am working on creating a dynamic dashboard that has the three fields below as three dropdown inputs. So, how can I make the token dynamic here, such that when I choose a value from only one of the three dropdown inputs, that value populates the token in the stats command? Thanks in advance!

index=<> sourcetype=<>
| search Type=$Token$
| search User=$Token$
| search Site=$Token$
| stats count by $Token$
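A hedged sketch of one shape this can take: give each dropdown its own token with a wildcard default so unselected inputs match everything, and drive the split-by field from a separate token (all token names here are illustrative):

index=<> sourcetype=<>
| search Type="$type_tok$" User="$user_tok$" Site="$site_tok$"
| stats count by $splitby_tok$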
I have a field value in Splunk in the below format:

field_X = "AB 012 - some text here! ---- HOST INFORMATION: ---- Source: 1.1.2.3 ---- DETAILS: -- Destination ports: 777 33 -- Occurrences: 2244 -- Destination ip counts: 146 -- Actions: blocked -- Order Techniques : X3465 "

Now how can I split the above field value into multiple lines to make it more user readable, using eval and regex?

field_X = AB 012 - some text here!
HOST INFORMATION:
Source: 1.1.2.3
DETAILS:
Destination ports: 777 33
Occurrences: 2244
Destination ip counts: 146
Actions: blocked
Order Techniques : X3465

All I want is to replace "--" with a line break or something that divides the field into multiple lines instead of one.
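A minimal sketch using split(), which turns the value into a multivalue field that Splunk renders one value per line (this assumes the runs of dashes only ever appear as separators; mvmap needs Splunk 8.0+):

| eval field_X=split(field_X, "--")
| eval field_X=mvmap(field_X, trim(field_X))
| eval field_X=mvfilter(field_X!="")

Splitting on "--" also breaks the "----" section markers, since they are just two separators back to back; the mvfilter drops the empty entries that leaves behind.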
1. I have installed the universal forwarder and have a Splunk Cloud account.
2. Installed the Splunk Cloud credentials app using this command: /opt/splunkforwarder/bin/splunk install app /tmp/splunkclouduf.spl
3. Restarted for the changes to take effect.
No logs in Splunk Cloud; index="*" found nothing.
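Two quick checks, sketched under the assumption that the forwarder itself is healthy (the hostname is a placeholder): on the UF, confirm the output destination is active, and in Splunk Cloud, search the internal index for the forwarder's own logs:

/opt/splunkforwarder/bin/splunk list forward-server

index=_internal host=my-uf-host

If the forwarder appears in _internal but your data doesn't, the problem is likely in inputs.conf rather than connectivity.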
So I have a macro with a field variable that I want to use with a wildcard, and worse, the field names tend to have dots. A typical field would be body.system.diskio.write.bytes, and I tried the following: LIKE($field$, "body_system_diskio%"), with the idea that it would error if the field did not at least contain body.system.diskio. I put the underscores in as I'm not sure it could handle the dots. This does not work for me. Anyone know what I'm doing wrong here? EDITED: I only had two options for conditionals and ended up getting it to work with match($BodySystemDiskIoBytes$, "body.system.diskio.write.bytes|body.system.diskio.read.bytes")
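For reference, like() uses SQL-style wildcards, where _ matches any single character, so "body_system_diskio%" would match the dotted names only by coincidence. A match() sketch with the dots escaped as literals (the field name ok and its values are illustrative):

| eval ok=if(match($field$, "^body\.system\.diskio\."), "contains", "missing")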
Team, the time difference between end_task_date and start_task_date is coming back null. Could you please take a look below and let me know what's wrong with my query?

SPL:
index=cloud sourcetype=lambda:Airflow2Splunk "\"logGroup\"" "\"airflow-OnePIAirflowEnvironment-DEV-Task\""
| rex field=_raw "Marking task as (?<status>[^\.]+)"
| where status IN("FAILED", "SUCCESS", "UP_FOR_RETRY")
| rex field=_raw "dag_id=(?<dag_id>\w+)"
| rex field=_raw "task_id=(?<task_id>\w+)"
| rex field=_raw "start_date=(?<task_start_date>\d{8}T\d{6})"
| rex field=_raw "end_date=(?<task_end_date>\d{8}T\d{6})"
| eval start_task_date=strptime(task_start_date,"%Y%m%dT%H%M%S")
| eval start_task_date=strftime(start_task_date,"%Y-%m-%d %H:%M:%S")
| eval end_task_date=strptime(task_end_date,"%Y%m%dT%H%M%S")
| eval end_task_date=strftime(end_task_date,"%Y-%m-%d %H:%M:%S")
| eval diff=end_task_date-start_task_date
| eval diff=strftime(diff, "%H:%M:%S")
| sort - _time
| table dag_id task_id status start_task_date end_task_date diff

The value of the diff column is coming back null; for the other columns I'm getting the data correctly. Here are sample values for the date fields:

end_task_date: 2022-04-06 20:51:11, 2022-04-06 20:54:09
start_task_date: 2022-04-06 20:51:09, 2022-04-06 20:52:07
end_date: 20220406T205111, 20220406T205409
start_date: 20220406T205109, 20220406T205207

Please let me know if you need additional information pertaining to this request. Thanks, Sumit
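The diff is null because by the time of the subtraction, both fields have been overwritten with strftime strings, so eval is subtracting text from text. A sketch that keeps the epoch values in separate fields for the math (the new field names are illustrative):

| eval start_epoch=strptime(task_start_date,"%Y%m%dT%H%M%S")
| eval end_epoch=strptime(task_end_date,"%Y%m%dT%H%M%S")
| eval diff=tostring(end_epoch - start_epoch, "duration")

tostring(..., "duration") renders the difference as HH:MM:SS, which also avoids the strftime-on-a-duration pitfall in the last eval.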
I upgraded the heavy forwarders in my environment to Splunk Enterprise 8.2.5 and figured out that, on the day of the upgrade, I stopped receiving data in one of my indexes. By searching events in the index prior to my upgrade, I was able to figure out that the host the events are received from is running Windows 2008 R2 (with Splunk UF version 7.2.2) - that may have something to do with this. I am trying to troubleshoot further and figure out how the data is being brought into that index, but I am not a seasoned Splunk veteran by any means. Searching around for answers has been a bit convoluted. Could anyone help me through the process of tracking down how that data is brought into that index? I'm thinking this may have something to do with a lack of compatibility for HTTPS from the host to the heavy forwarder. Any help or guidance is much appreciated.
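Two sketch searches that may help locate where the flow stops (the hostnames are placeholders): check whether the UF's own internal logs still arrive, and look for errors logged on the heavy forwarder around the receiving input:

index=_internal host=my-win2008-uf earliest=-24h

index=_internal sourcetype=splunkd host=my-heavy-forwarder (ERROR OR WARN)

If the UF's internal logs also stopped at the upgrade, the connection itself is suspect (SSL/TLS compatibility between UF 7.2.2 and 8.2.5 is one plausible culprit); if they still flow, the input configuration is a better place to look.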
I have the following data:

query="select field from table where (status!="Y") and ids.id IN ["123","145"] limit 500" params="{}"

How can I extract the query field while ignoring the special characters? My query

... | table query params

is cut off when it reaches the status field. How can I extract everything regardless of the special characters? Is there any recommended format I should change my logs to, to make them easier to parse?
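A hedged rex sketch that captures everything between query=" and the closing quote that precedes params=, assuming params always follows query in the event:

| rex field=_raw "query=\"(?<query>.*)\"\s+params="
| rex field=_raw "params=\"(?<params>[^\"]*)\""
| table query params

The greedy .* tolerates the embedded quotes and brackets. Longer term, emitting the query JSON-encoded (or with inner quotes escaped) would make the logs much easier to parse reliably.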
Hello Splunkers, I have data where the index time is different from the time in the actual file. The source has the correct date and time. I want Splunk to use it as the index time, and if that is not possible, I want to extract the date and time from the source field, create a new field, and use that as the date and time field. Below is an example of the source file:

Source: /admin/logs/abc/inventory/04-04-2022-101634-all-b5.xxx

1. I want Splunk to take 04-04-2022-101634 and use it as the indexed time, i.e. 04-04-2022 10:16:34 (dd-mm-yyyy hh:mm:ss). I want the props.conf for this.
2. Also, if the data is indexed without using the source file, I want to extract the date and time from the source, create a new field called correct_time, and use it as _time.

Thanks in advance
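For point 2, a search-time sketch that pulls the timestamp out of the source path and assigns it to _time (correct_time as requested):

| rex field=source "(?<correct_time>\d{2}-\d{2}-\d{4}-\d{6})"
| eval _time=strptime(correct_time, "%d-%m-%Y-%H%M%S")

For point 1 (true index-time extraction from the source path), props.conf alone with TIME_PREFIX won't look at source; the documented route involves a custom datetime.xml referenced via DATETIME_CONFIG, which is worth confirming against the Splunk docs for your version.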
I'm trying to understand the different capabilities within Splunk to see how they can be used to my advantage. I was exploring the apps_backup and apps_restore Splunk capabilities to see if they can be used with the API for backing up and restoring specific apps without actual back-end access and without having to replicate the "$SPLUNK_HOME/etc/" directory. There is one mention of apps_restore in the document below, noting it can be used on the endpoint "apps/restore": https://docs.splunk.com/Documentation/Splunk/8.2.5/Admin/Authorizeconf I noticed that restmap.conf also has entries for "apps/backup" and "apps/restore", but I could find no documentation on the usage of these endpoints and methods. Could someone point me in the right direction here?
I am trying to use the REST endpoints for the Microsoft Azure Add-on for Splunk (TA-MS-AAD). When posting the following:

localhost:8089/servicesNS/nobody/TA-MS-AAD/admin/TA_MS_AAD_account/testacct/create?username=myazidhere&password=lslslslslsslsl

I receive the following error:

"ERROR" <class 'splunk.admin.BadProgrammerException'>: This handler claims to support this action (4), but has not implemented it.

As you can see, I am trying to create our credentials via this API to support automated installation/configuration of this add-on. Has anyone been able to use the REST endpoints with this add-on? Any consumer examples would be great - thanks.
I have Splunk_TA_nix installed and ps.sh enabled on my Apache Storm nimbus instances. I can run a general ps sourcetype query on a service I know should always be running, like rhnsd, and get events back just fine:

index=os host="my-stormn-1" sourcetype=ps rhnsd

However, when I do the same for the "stormnimbus" service, I get zero events back:

index=os host="my-stormn-1" sourcetype=ps stormnimbus

Meanwhile, a "sudo systemctl status stormnimbus" on the my-stormn-1 instance itself shows that it is active and running. I'm having the same problem with the stormui service as well as the stormsupervisor service running on my Storm supervisor instances. I should note that I do have Splunk_TA_nix installed on my Splunk indexers. Any advice as to why these services are not returning events with ps, and how to fix it, would be greatly appreciated.
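One sketch worth trying: ps reports the running command rather than the systemd unit name, and Storm components typically run as java processes, so "stormnimbus" may never appear in the ps output even while the unit is active:

index=os host="my-stormn-1" sourcetype=ps java

index=os host="my-stormn-1" sourcetype=ps "*storm*"

If the nimbus JVM shows up under java (or a truncated command string), matching on part of the command line or class name is the likelier fix.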