All Topics


When I run this search, I get no results for EventCode=4769: index=main (EventCode=4634 OR EventCode=4624 OR EventCode=4769) Logon_Type!=3. If I run index=main (EventCode=4769) OR Logon_Type!=3, I do get results for 4769. What can I do with the first search to include the 4769 results? Thanks in advance!
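A likely cause (an assumption here, since the raw events aren't shown): EventCode=4769 events may not contain a Logon_Type field at all, and Logon_Type!=3 only matches events where the field exists. If so, a sketch of a workaround:

```
index=main (EventCode=4634 OR EventCode=4624 OR EventCode=4769)
    (Logon_Type!=3 OR NOT Logon_Type=*)
```

This keeps events where Logon_Type is missing, as well as those where it is present and not 3.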
Hi all, I have installed Splunk Enterprise on a server, with the web interface on port 8000. I also added the receiving indexer configuration on Splunk Enterprise with port 9997, and installed a universal forwarder on another server to collect logs and send them to Splunk Enterprise. During the forwarder installation I entered the deployment server as the IP address of the server where Splunk Enterprise is installed, with port 8000, and set the receiving indexer configuration to port 9997. After installation I restarted the universal forwarder, but the logs are not arriving in Splunk Enterprise. Kindly assist me in this matter.
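One detail worth noting (an assumption about the setup, since only the ports are given): port 8000 is Splunk's web UI port, while deployment clients talk to the deployment server on the management port, 8089 by default, so entering port 8000 there may be why the forwarder never checks in. A sketch of the two forwarder-side config files involved, with placeholder IPs:

```
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on the forwarder
[target-broker:deploymentServer]
targetUri = <splunk-enterprise-ip>:8089

# $SPLUNK_HOME/etc/system/local/outputs.conf on the forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = <splunk-enterprise-ip>:9997
```

Restart the forwarder after editing, and check its splunkd.log for connection errors to 9997.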
Good morning, I have monitoring set up for a server (A) and I want to create new monitoring for a new server (B) using the same settings (directories) as server (A).
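Assuming the existing monitoring is defined with [monitor://...] stanzas (the paths, index, and sourcetype below are placeholders, not taken from the post), one approach is to copy the same inputs.conf stanzas to the forwarder on server B, or to place them in a deployment app that both servers subscribe to:

```
# inputs.conf (sketch)
[monitor:///var/log/myapp]
index = main
sourcetype = myapp:logs
disabled = false
```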
Hi, I am using the latest version of DB Connect on Splunk 8.0.1, and I want to set the host field from one of the columns instead of using a static value. How can I achieve this? Also, does searching on the host field perform better than searching on that column's field value? Thanks,
I was working with the where command like below:   index=abc|where (id=1ORid=2ORid=3)   Between the id comparisons I used the OR operator, and by mistake I didn't put spaces before and after OR. Still, I get the same results as the query below:   index=abc|where (id=1 OR id=2 OR id=3)   So does it not matter whether there are spaces before and after the OR operator?
Hi, I searched and found several posts about my situation, but they all lead nowhere. So, my situation... Unfortunately we have a few logs that mix formats, e.g. they start in plain text and then contain a JSON payload. The events are <4000 chars, so I can't see where the truncation is happening. I've also tried specifying the following in props/transforms with no difference:

TRUNCATE = 100000
MAXEVENTS = 500
MAXCHARS = 100000
DEPTH_LIMIT = 5000

I'm probably missing something obvious. My current props/transforms for this sourcetype are:

[opt:gateway]
KV_MODE = none
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
TRANSFORMS-opt_json2 = optimus_dll1,optimus_dll2
SEDCMD-eol = s/\\r\\n//g
LINE_BREAKER = ([\r\n]+)

[optimus_dll1]
REGEX = ^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s+\[\S+\]\s+(?P<LogLevel>[^ ]\w+)\s+(?P<OptimusDLL>[^ ]+)
WRITE_META = true
REPEAT_MATCH = false
FORMAT = LogLevel::$1 OptimusDLL::$2

[optimus_dll2]
REGEX = ^(?:[^$]*)\s-\s(?P<json>.+)
FORMAT = json::$1
WRITE_META = true
REPEAT_MATCH = false
SOURCE_KEY = _raw
DEPTH_LIMIT = 5000

optimus_dll1 is extracted as expected; optimus_dll2 only grabs the first 1000 chars of the match. Using the regex inline via rex, via regex101, or on the command line all extract the full field. TIA, Steve
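One setting worth checking (an assumption, not something confirmed in the post): index-time transforms only apply their REGEX to the beginning of the event, controlled by LOOKAHEAD in transforms.conf (default 4096 characters). If the JSON payload starts or extends deep into the event, raising it in the transform stanza may help:

```
# transforms.conf (sketch): let the transform's regex see the whole event
[optimus_dll2]
LOOKAHEAD = 100000
```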
Hey Splunkers! I have several events from a particular index, and I'm looking to extract field-value pairs from one of the fields. Sample event:

Description: Attribute: environment=PROD\nAttribute: severity=MAJOR\nAttribute: time_ins=2020-11-30T17:45:33\nAttribute: affected_aspect=Exit\nAttribute: plane=Prod\nAttribute: workflow_state=New
Type: ALERT

I need each of these attributes as another column in the search:

environment severity time_ins affected_aspect plane workflow_state Type
prod MAJOR 2020-11-30T17:45:33 Exit Prod New ALERT

Can someone please guide me? Thank you!
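A sketch of one way to do it (the field and index names are assumptions): split the Description field into key=value pairs, then turn each key into its own column:

```
index=your_index
| rex field=Description max_match=0 "Attribute:\s*(?<pair>\w+=[^\\\\]+)"
| mvexpand pair
| rex field=pair "(?<key>\w+)=(?<value>.+)"
| eval {key}=value
| fields - pair key value
| stats values(*) as * by _time Type
```

The eval {key}=value creates a field named after each extracted key, and the final stats recombines the mvexpand-ed rows back into one row per event.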
Hi All, I need help with a duration filter. Code:

index=opennms "ciscoLwappApIfUpNotify" OR "ciscoLwappApIfDownNotify"
| rex field=eventuei "ciscoLwappApIf(?<Status>.+)"
| stats max(_time) as Time latest(Status) as Status by nodelabel
| where Status="DownNotify"
| fieldformat Time=strftime(Time,"%Y-%m-%d %l:%M:%S")
| eval Downtime = now() - Time
| eval Downtime = tostring(Downtime, "duration")
| rex field=Downtime "(?P<Downtime>[^.]+)"
| table nodelabel, Status, Downtime, Time

Sample output:

nodelabel Status Downtime Time
USBTNBTECE DownNotify 0:12:02 12/9/2020 2:36
USJOLWLC DownNotify 1:31:21 12/9/2020 2:17
USMBP DownNotify 2:08:25 12/9/2020 1:39

The requirement is to filter out all rows with less than 1 hr of Downtime. I tried all possibilities like "| where duration > 3600", but no output comes when I add this. Please suggest a solution.
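Two things look likely from the query as posted (my reading, so treat it as an assumption): the field is named Downtime, not duration, and by the time of the where it has been converted to a string by tostring(..., "duration"), so a numeric comparison no longer works. A sketch that filters on the numeric seconds before formatting:

```
...
| eval Downtime_secs = now() - Time
| where Downtime_secs >= 3600
| eval Downtime = tostring(Downtime_secs, "duration")
| table nodelabel, Status, Downtime, Time
```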
Hi, I removed several indexes from indexes.conf, and after applying the change I found that the reported number of indexes differs from place to place. In the monitoring console, on the "Indexes and Volumes: Instance" tab, I see one number; when I run |rest /services/data/indexes and filter on indexes, the count is 376. Can you explain why, and how to get the same number in every place?
Hello, I am new to Splunk Cloud. I created a trial account and was trying to post raw events to Splunk Cloud. My cloud URL: https://prd-p-ty649.splunkcloud.com. POST URL: https://http-input-prd-p-ty649.splunkcloud.com:8088/services/collector/raw/1.0?channel=fdd8be30-edab-4a92-acdb-b7ed8ec49878&host=scp.dev.com&index=scpi_dev&source=mpl.logs&sourcetype=scpi_mpl — but I always get error 503 - Service Unavailable. Please help!
Hi, I am getting "permission denied" when trying to submit a support ticket.
When the mobile application crashes, it doesn't capture crash details like the exception name and path, doesn't show any events prior to the crash, and doesn't show any user data.
Hi all Splunkers, may I know how to set frozenTimePeriodInSecs under different apps? For example, the Compliance app's retention period is 180 days for all of its indexes, and the Audit app's retention is 90 days for all of its indexes. Because we regularly need to create many new indexes under those apps, for convenience and to fulfill the compliance retention requirements, can we set frozenTimePeriodInSecs at the app level instead of at the system level or index by index (which would also avoid human mistakes)? I tried it under $SPLUNK_HOME/etc/apps/compliance/local and $SPLUNK_HOME/etc/apps/audit/local: in each app I configured indexes.conf with a [default] stanza, set frozenTimePeriodInSecs to 180 days and 90 days respectively, and disabled the global setting under system/local/indexes.conf. However, it seems to have failed due to the precedence order:

1. System local directory -- highest priority
2. App local directories
3. App default directories
4. System default directory -- lowest priority

At the app local directory level, the default always gets the smaller value, which is 90 days. Can any expert here provide some advice?
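One thing worth noting (my reading of the config layering, so treat it as an assumption to verify): a [default] stanza in indexes.conf is global after the configuration files are merged, so the [default] stanzas from the two apps collide and only one value survives, which matches what you observed. Setting the attribute per index stanza avoids the collision, e.g. (index names and paths are placeholders):

```
# etc/apps/compliance/local/indexes.conf (sketch)
[compliance_idx_1]
homePath   = $SPLUNK_DB/compliance_idx_1/db
coldPath   = $SPLUNK_DB/compliance_idx_1/colddb
thawedPath = $SPLUNK_DB/compliance_idx_1/thaweddb
frozenTimePeriodInSecs = 15552000

# etc/apps/audit/local/indexes.conf (sketch)
[audit_idx_1]
homePath   = $SPLUNK_DB/audit_idx_1/db
coldPath   = $SPLUNK_DB/audit_idx_1/colddb
thawedPath = $SPLUNK_DB/audit_idx_1/thaweddb
frozenTimePeriodInSecs = 7776000
```

15552000 is 180 days in seconds (180 * 86400) and 7776000 is 90 days (90 * 86400).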
Hi Team, I have a huge environment where server decommissioning happens once a month, and we need to check the traffic of each server in Splunk. I want to explore a bulk search option in Splunk where I can import the server names from an Excel/CSV file, have the search run against them, and get the traffic statistics as a result.
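One common pattern (a sketch; the lookup name, field name, and index are assumptions): upload the CSV as a lookup file, then drive the search from it with a subsearch:

```
index=* [ | inputlookup decomm_servers.csv | fields host ]
| stats count as events sum(eval(len(_raw))) as approx_bytes by host
```

The subsearch expands to (host=server1 OR host=server2 OR ...), so the CSV needs a column named host, or you can rename your column to host inside the subsearch.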
My Phantom app's phantom_forwarding.log generated logs like this: phantom_forward:129 - C:\Program Files\Splunk\etc\apps\phantom\bin\scripts\phantom_forward.py called without a session token. To describe my current situation: I am able to send events to Phantom from a saved search using the Phantom add-on, but only when I manually press the "Send to Phantom" button; Phantom then receives the event. The add-on cannot automatically forward events to Phantom, and the error above appears in phantom_forwarding.log. How do I solve this error?
I am trying to create a query using tstats against the Malware data model. One of the sourcetypes I want to include, 'abc', comes up in an index search but not in a data model tstats search, even though the index is already mapped to the Malware data model. Is this possible?
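One thing to rule out (an assumption, since the query isn't shown): if the tstats search uses summariesonly=true, it only returns events that have already been accelerated into the data model summaries, so matching events that are not yet summarized won't appear. A sketch to compare:

```
| tstats summariesonly=false count from datamodel=Malware where sourcetype=abc by sourcetype
```

If this returns results while summariesonly=true does not, look at the acceleration status, or at whether events from that sourcetype actually satisfy the data model's constraint search.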
Hello, I have this query:

|datamodel events_prod events summariesonly=true flat
| search _time>=1597968172.000 _time<=1598146450.0001 eventtype="csm-messages" tail_id=AN
| eval crate_path=source
| rename kafka_uuid as uuid, _time as timestamp, _raw as data
| fields uuid, timestamp, data, crate_path
| dedup uuid
| sort 0 - timestamp
| head 1000

I want to add the total count of the events at the end. If I use append, the query runs for a long time. Any suggestions? Thanks
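A sketch that avoids re-running the search (assuming the goal is the total as an extra column or row): eventstats adds the count as a field on every row, and appendpipe computes it from the results already in the pipeline rather than launching a second search:

```
... | dedup uuid
| eventstats count as total_events
| sort 0 - timestamp
| head 1000
```

or, to add the total as a final summary row:

```
... | head 1000
| appendpipe [ stats count as total_events ]
```

Note that eventstats placed before head counts the deduplicated rows, not just the 1000 kept; put it wherever matches the total you want.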
Hello Splunkers. First of all, I'm sorry that my English is not good. I am using Splunk DB Connect 2.4.1 on Splunk 7.2.6 and I have 3 problems I need help with.

1. Error with an empty detail

Looking at _internal, I saw the following errors (error=""):

2020-12-09T10:44:30+0700 [CRITICAL] [mi_input.py], line 61: action=loading_input_data_failed input_mode=tail dbinput="mi_input://DATA" error=""
2020-12-09T10:44:30+0700 [CRITICAL] [ws.py], line 327: [DBInput Service] Exception encountered for entity-name = mi_input://DATA and type = input with error = .
2020-12-09T10:44:30+0700 [INFO] [mi_base.py], line 190: action=caught_exception_in_modular_input_with_retries modular_input=mi_input://DATA retrying="1 of 6" error=
Traceback (most recent call last):
  File "/u01/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/mi_base.py", line 183, in run
    checkpoint_value=checkpoint_value)
  File "/u01/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/health_logger.py", line 283, in wrapper
    return get_mdc(MDC_LOGGER).do_log(func, *args, **kwargs)
  File "/u01/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/health_logger.py", line 160, in do_log
    return func(*args, **kwargs)
  File "/u01/splunk/etc/apps/splunk_app_db_connect/bin/mi_input.py", line 205, in run
    _do_tail_mode(input_name, inputws, self.db, params, self.user_name, splunk_service, output_stream)
  File "/u01/splunk/etc/apps/splunk_app_db_connect/bin/mi_input.py", line 57, in _do_tail_mode
    inputws.doTail(db, params, user, stanza, callback=callback)
  File "/u01/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/ws.py", line 281, in doTail
    self.doInput("dbinputTailIterator", database, params, user, entityName, callback)
  File "/u01/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/ws.py", line 275, in doInput
    self.ws.run_forever(timeout=self.timeout)
  File "/u01/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/websocket.py", line 841, in run_forever
    self._callback(self.on_error, e)
  File "/u01/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/websocket.py", line 852, in _callback
    callback(self, *args)
  File "/u01/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/ws.py", line 328, in on_error
    raise Exception ("%s" % error)
Exception

Has anyone seen this before? Why did it happen and how can I fix it?

2. The database input job works very slowly

For some time everything was OK, but all of a sudden Splunk stopped indexing new data. Looking at _internal, I saw only this one message:

2020-12-09T10:44:33+0700 [INFO] [mi_input.py], line 193: action=start_executing_dbinput dbinput="mi_input://DATA"

When the job works fine, there are many messages like these:

2020-12-09T10:44:33+0700 [INFO] [mi_input.py], line 193: action=start_executing_dbinput dbinput="mi_input://DATA"
2020-12-09T10:44:29+0700 [INFO] [modular_input_event_writer.py], line 113: action=print_csv_from_jdbc_to_event_stream dbinput="mi_input://DATA" input_mode=tail events=300
2020-12-09T10:44:29+0700 [INFO] [mi_input.py], line 109: action=rising_column_checkpoint_updated dbinput="mi_input://DATA" checkpoint=8068170343
2020-12-09T10:45:52+0700 [INFO] [mi_input.py], line 193: action=complete_dbinput dbinput="mi_input://DATA"

I tried running the query directly in the DB Connect app interface and the result came back very fast, so I think the database input job has a problem.

3. Encrypting/hashing a field before indexing

I am using Splunk DB Connect 2.4.1 on Splunk 7.2.6. Some fields in the data are sensitive, e.g. card_number. So I edited the code in the modular_input_event_writer.py file in the DB Connect app so that it hashes the card_number field into a new field called hash_number, and it works fine. With DB Connect 3.x I can't transform the data field that way, because DB Connect 3.x uses Java and Python 3 and is very different from 2.4.1. Is there any way to encrypt or hash a data field before Splunk indexes it? Thanks in advance.
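For problem 3, one documented index-time alternative (a sketch under assumptions about the data shape; the sourcetype and regex are placeholders, and this masks the value rather than hashing it) is SEDCMD in props.conf on the indexer or heavy forwarder:

```
# props.conf (sketch): mask all but the last 4 digits of card_number at index time
[your:db:sourcetype]
SEDCMD-mask_card = s/card_number=\d{12}(\d{4})/card_number=XXXXXXXXXXXX\1/g
```

This avoids patching the DB Connect code, so it survives app upgrades.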
Hi, I am trying to use a custom command to read data from another database and return it to Splunk as query results. I understand that a custom command can be developed as a Python script, and I'm thinking that the script can read the data from the database and return it to Splunk. I have a few questions:

1. When we execute the custom search command in the Splunk search GUI, does the request go to the indexers? In other words, do I need the indexers at all when I run only the custom search command in my Splunk distributed environment?
2. The custom search command has to retrieve a lot of data from the database. Can it be used against large databases, and should performance issues be anticipated?
3. How does Splunk handle custom search commands in the backend? When multiple users execute the custom search command, does Splunk execute the Python script in the same thread, or in a different thread per user search request?

Thanks
Hi, I have a lookup file with the entire list of service names. Now I want to perform a search that shows the count per service, and for services not present in the logs for the selected time range but present in the lookup file, the count has to be shown as 0. Please assist @niketn
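A common pattern for zero-filling counts from a lookup (a sketch; the index, lookup name, and field name are assumptions):

```
index=your_index
| stats count by service
| append [ | inputlookup services.csv | fields service | eval count=0 ]
| stats max(count) as count by service
```

The append adds every service from the lookup with count=0, and the final stats max keeps the real count where one exists and 0 otherwise.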