All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I requested a Dev license a while ago, but I haven't heard anything from Splunk since. I have re-requested it a couple of times, but still no answer. I even emailed Splunk, yet even that email is being ignored. I am new to Splunk and just want to get started with the Developer license. How do I get my request approved? For real this time, as I have already attempted every standard solution. I just want somebody to approve my request, that's all.

I have a lookup file bad_domain.csv:

baddomain.com
baddomain2.com
baddomain3.com

I want to search my proxy logs for users connecting to the bad domains in my lookup list, but including subdomains. Example:

subdo1.baddomain.com
subdo2.baddomain.com
subdo1.baddomain2.com

Please help: how do I create that condition in an SPL query?

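One possible approach, as a sketch: define the CSV as a wildcard lookup in transforms.conf and add `*.`-prefixed rows to the file alongside the bare domains. The proxy field holding the destination host is assumed here to be `url`; substitute your actual field name.

```
# transforms.conf -- match the lookup's domain column with wildcards
[bad_domain_lookup]
filename   = bad_domain.csv
match_type = WILDCARD(domain)
```

With rows such as `*.baddomain.com` added to bad_domain.csv, a search along these lines would flag both the bare domains and their subdomains:

```
index=proxy
| lookup bad_domain_lookup domain AS url OUTPUT domain AS bad_domain
| where isnotnull(bad_domain)
```
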
Is it possible to take the Splunk Admin certification after the Splunk Power User certification has expired?

Has anyone ever faced or implemented this on Splunk ES? I'm facing an issue when trying to add a TAXII feed from an OTX API connection. I have already checked the connectivity and made changes to the configuration, up to disabling the preferred captain on my search head, but it is still not resolved. I also know there is an app for this, but I just want to clarify whether this option is still supported or not. Here are my POST arguments:

URL: https://otx.alienvault.com/taxii/discovery
POST Argument: collection="user_otx" taxii_username="API key" taxii_password="foo"

But the download status stays stuck on "TAXII feed polling starting", and when I check the PID information:

status="This modular input does not execute on search head cluster member" msg="will_execute"="false" config="SHC" msg="Deselected based on SHC primary selection algorithm" primary_host="None" use_alpha="None" exclude_primary="None"

Hi! Is it possible to integrate the app with multiple ServiceNow instances? If yes, how do you "choose" the instance you want to create the incident in? For example, when using: | snowincidentstream OR | snowincidentalert

I have around 60 standalone Windows laptops that are not networked. I am looking to install a UF to capture the Windows logs and have them stored on the local drive at "c:\logs". The logs will then be transferred to a USB drive for archiving and indexed into Splunk for NIST 800 compliance, e.g. login success/failure. I am struggling to find the correct syntax for the UF to save locally, as it asks for a host and port. josh

There is no pattern or punctuation, so running regex might not work in this situation, since I can't know what kind of error or pattern will appear in the final line/sentence of the field. The last sentence can be anything and is unpredictable, so I just wanted to see if there is a way to grab the last line of the log that is in the field. This example most likely won't help, but it paints a picture that I just want the last line:

index=example | search "House*" | table Message

The log looks similar to this:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example /local/line499
D://example ......a bunch of sensitive information
D://example /crab/lin650
D://example ......a bunch of sensitive information
D://user/local/line500

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
Error : someone stepped on the wire.

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://user/local/line980 ,indo

Next example:

Starting logs (most recent logs):
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
D://example ......a bunch of sensitive information
Error : Simon said Look

Goal:

D://user/local/line500
Error : someone stepped on the wire.
D://user/local/line980 ,indo
Error : Simon said Look

I hope this makes sense....

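A possible sketch, assuming the multi-line text lives in the `Message` field as in the post: anchor a regex to the end of the field so it captures whatever the final non-empty line happens to be, regardless of its content.

```
index=example "House*"
| rex field=Message "(?<last_line>[^\r\n]+)\s*$"
| table last_line
```

`[^\r\n]+` grabs a run of non-newline characters, and the `\s*$` anchor pins it to the end of the field, so no assumption is made about what the last line contains.
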
The log entries below mix formats within each event: text, a delimiter (|), and JSON. I am not sure how to write props.conf for proper field extraction and line breaking.

2024-03-11T20:58:12.605Z [INFO] SessionManager sgrp:System_default swn:99999 sreq:1234567 | {"abrMode":"NA","abrProto":"HLS","event":"Create","sUrlMap":"","sc":{"Host":"x.x.x.x","OriginMedia":"HLS","URL":"/x.x.x.x/vod/Test-XXXX/XXXXX.smil/transmux/XXXXX"},"sm":{"ActiveReqs":0,"ActiveSecs":0,"AliveSecs":360,"MediaSecs":0,"SpanReqs":0,"SpanSecs":0},"swnId":"XXXXXXXX","wflow":"System_default"}
2024-03-11T20:58:12.611Z [INFO] SessionManager sgrp:System_default swn:99999 sreq:1234567 | {"abrMode":"NA","abrProto":"HLS","event":"Cache","sUrlMap":"","sc":{"Host":"x.x.x.x","OriginMedia":"HLS","URL":"/x.x.x.x/vod/Test-XXXXXX/XXXXXX.smil/transmux/XXX"},"sm":{"ActiveReqs":0,"ActiveSecs":0,"AliveSecs":0,"MediaSecs":0,"SpanReqs":0,"SpanSecs":0},"swnId":"XXXXXXXXXXXXX","wflow":"System_default"}

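A possible starting point, as a sketch (the sourcetype name is a placeholder): break events on the leading ISO-8601 timestamp, then extract the JSON payload after the `|` delimiter at search time.

```
# props.conf -- one event per timestamped line
[my_sessionmanager]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Then at search time, pull the JSON part out and hand it to spath:

```
sourcetype=my_sessionmanager
| rex field=_raw "\|\s+(?<json_payload>\{.+\})$"
| spath input=json_payload
```
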
Hello, I need help with auto multi-select of input values. I have index values like data1, data2, data3. If I select data1, the sourcetype related to data1 should be auto-selected; if I multi-select data1 and data2 in the index input, the corresponding sourcetypes should be auto-selected in the multi-sourcetype input.

I have an alert that can clear in the same minute that it originally fired. When the correlation search runs, both events are in it: the alert and the clearing alert. The correlation search creates notable events for each, but uses the current time as the _time for the notable events, not the _time from the original alerts. Since both alerts are converted into notable events during the same correlation search run, they get the exact same timestamp. This means ITSI cannot definitively know the correct order of the events, and it sometimes thinks the Normal/Clear event came BEFORE the original alert. This seems odd to me. I would have imagined that ITSI would use the original event time as the _time for the notable event, but it doesn't. Any ideas on how to address this?

Hello. I have a data source that is "mostly" JSON formatted, except it uses single quotes instead of double quotes, so Splunk does not honor it if I set the sourcetype to json. If I run a query against it using this:

sourcetype="test" | rex field=_raw mode=sed "s/'/\"/g" | spath

it works fine and all fields are extracted. How can I configure props and transforms to perform this change at index time, so that my users don't need the additional search commands and all fields are extracted by default, short of manually extracting each field? Example event, no nested fields:

{'date': '2024-02-10', 'time': '18:59:27', 'field1': 'foo', 'field2': 'bar'}

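One possible approach, sketched against the `test` sourcetype from the post: rewrite the quotes at parse time with a SEDCMD in props.conf, then let search-time JSON extraction handle the fields.

```
# props.conf (on the indexer or heavy forwarder)
[test]
# rewrite single quotes to double quotes as the data is indexed
SEDCMD-single_to_double = s/'/"/g
# with valid JSON in _raw, extract fields automatically at search time
KV_MODE = json
```

Note that SEDCMD only applies to newly indexed data; events already indexed with single quotes would still need the search-time rex workaround.
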
I was wondering if there is a Splunk app or feature available to add a search bar when filtering by Splunk app. Every time, you have to scroll for a while just looking for the correct Splunk app, even if it's just the Search app. Is there a way to add a search bar for the apps? We have one for other pages and options. I may be overlooking something.

Hello! I am trying to upgrade to the latest version of Splunk Enterprise 9.3 on a RHEL 8 server, but I am getting this error message after accepting the license. Has anyone seen this error? I have checked the permissions, and they are all fine. Thanks!

Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 846, in exec_module
  File "<frozen importlib._bootstrap_external>", line 982, in get_code
  File "<frozen importlib._bootstrap_external>", line 1039, in get_data
PermissionError: [Errno 1] Operation not permitted: '/opt/splunk/lib/python3.9/encodings/__init__.py'

Hi, I have a Splunk dashboard created in Dashboard Studio. The dashboard has 3 tables, and all the values in these tables are either left- or right-aligned, but I want them to be center-aligned. I tried finding solutions, but all the solutions mentioned in other posts are for Classic dashboards, which are written in XML. How can we do this in a JSON-defined dashboard? Thanks, Viral

Hello all, I have a lookup file which stores data about hosts across multiple indexes. I have reports which fetch host information from each index and update the records in the lookup file. Can I run parallel searches for the hosts related to each index and thus update the same lookup file in parallel? Or is there a risk to performance or data consistency? Thank you, Taruchit

Hi Splunkers, the idea is to pull any new file creations in a particular folder inside C:\users\<username>\appdata\local\somefolder. I wrote a batch script to pull and index this data. It's working, but the issue is that I cannot define a token for users. E.g., if I specify the path as C:\users\<user1>\appdata\local in the script, the batch script runs as expected and the data is indexed into Splunk, but if I replace user1 with %userprofile% or %localappdata%, the batch script does not run. How do I resolve this?

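A possible workaround sketch: if the forwarder runs the script as a service account (e.g. SYSTEM), %USERPROFILE% and %LOCALAPPDATA% would resolve to that account's profile rather than a logged-in user's, which could explain the behavior. One way around this is to enumerate every profile under C:\Users explicitly. The folder name `somefolder` is taken from the post; whether the service-account assumption applies depends on how the forwarder is installed.

```bat
@echo off
rem Iterate over every user profile directory instead of relying on %USERPROFILE%
for /D %%U in ("C:\Users\*") do (
    rem Only list the target folder when it exists for this profile
    if exist "%%U\AppData\Local\somefolder" (
        dir /b "%%U\AppData\Local\somefolder"
    )
)
```
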
The following query retrieves confroom_ipaddress values from the lookup table that do not match IP addresses found in the indexed logs:

| inputlookup lookup_ist_cs_checkin_rooms.csv where NOT [search index=fow_checkin message="display button:panel-*" | rex field=message "ipaddress: (?<ipaddress>[^ ]+)" | stats values(ipaddress) as confroom_ipaddress | table confroom_ipaddress]
| rename confroom_ipaddress as ipaddress1

I would like to add an additional condition to include IP addresses that match those found in the following logs:

index=fow_checkin "Ipaddress(from request header)" | rex field=message "IpAddress\(from request header\):\s*(?<ip_address>\S+)$" | stats values(ip_address) as ip_address2

This means we need to include IP addresses from lookup_ist_cs_checkin_rooms.csv that match the message "Ipaddress(from request header)" and exclude IP addresses from lookup_ist_cs_checkin_rooms.csv that match the message "display button:panel-*". Please help.

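One way this might be combined, as an unverified sketch (field and lookup names are taken from the post): load the lookup, then apply the include condition and the exclude condition as two successive search filters, each fed by a subsearch that emits `ipaddress1` terms via `format`.

```
| inputlookup lookup_ist_cs_checkin_rooms.csv
| rename confroom_ipaddress AS ipaddress1
| search [search index=fow_checkin "Ipaddress(from request header)"
          | rex field=message "IpAddress\(from request header\):\s*(?<ipaddress1>\S+)$"
          | stats values(ipaddress1) AS ipaddress1
          | format]
| search NOT [search index=fow_checkin message="display button:panel-*"
          | rex field=message "ipaddress: (?<ipaddress1>[^ ]+)"
          | stats values(ipaddress1) AS ipaddress1
          | format]
```

Renaming the subsearch capture to `ipaddress1` before `format` makes the generated terms match the renamed lookup field.
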
Hi all, hoping someone can help me with this query. I have a data set that looks at a process and how long it takes to implement. Each event is populated with a start date and an end date. I want to create a calendar view that shows the schedule of the processes in implementation, for example:

process 1: start date 12/08/2024, end date 16/08/2024 (5 days implementation)
process 2: start date 12/08/2024, end date 12/08/2024 (1 day implementation)
process 3: start date 13/08/2024, end date 15/08/2024 (3 days implementation)
process 4: start date 14/08/2024, end date 16/08/2024 (2 days implementation)

I want to produce a graph or calendar view that shows how many processes we have in implementation, counting each day of the implementation period (based on start and end date). For the above example it would look like:

Date                 Count of processes in implementation
12/08/2024     2 (process 1 and 2)
13/08/2024     2 (process 1 and 3)
14/08/2024     3 (process 1, 3 and 4)
15/08/2024     3 (process 1, 3 and 4)
16/08/2024     2 (process 1 and 4)

Any help greatly appreciated.

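One way to sketch this (the field names `process`, `start_date`, and `end_date` are assumptions): expand each event into one row per day between its start and end dates, then count distinct processes per day.

```
index=processes
| eval start=strptime(start_date, "%d/%m/%Y"), end=strptime(end_date, "%d/%m/%Y")
| eval day=mvrange(start, end + 86400, 86400)
| mvexpand day
| stats dc(process) AS "Count of processes in implementation" BY day
| sort 0 day
| eval Date=strftime(day, "%d/%m/%Y")
| table Date "Count of processes in implementation"
```

`mvrange(start, end + 86400, 86400)` generates one epoch value per day, inclusive of the end date; sorting on the epoch `day` before formatting keeps the dates in chronological order.
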
Hi, I want to set up a home lab with Splunk Enterprise and a Splunk forwarder on the same OS, with the forwarder pulling logs into Splunk. Is it possible to set it up this way?

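A minimal sketch of how the two instances could be wired together on one machine, assuming the default receiving port 9997 (the UF must be installed in a separate directory and given a management port other than Splunk Enterprise's 8089 to avoid a conflict):

```
# inputs.conf on the Splunk Enterprise instance: listen for forwarded data
[splunktcp://9997]
disabled = 0

# outputs.conf on the Universal Forwarder (same host): send to the local indexer
[tcpout:local_indexer]
server = 127.0.0.1:9997
```
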
Hello everyone, please check the below data:

ERROR 2024-08-09 14:19:22,707 email-slack-notification-impl-flow.BLOCKING @3372f96f] [processor: email-slack-notification-impl-flow/processors/2/route/0/processors/0; event: 5-03aca501-42b3-11ef-ad89-0a2944cc61cb] error.notification.details: { "correlationId" : "5-03aca501-42b3-11ef-ad89-0a2944cc61cb", "message" : "Error Details", "tracePoint" : "FLOW", "priority" : "ERROR", }
ERROR 2024-08-09 14:19:31,389 email-slack-notification-impl-flow.BLOCKING @22feab4f] [processor: email-slack-notification-impl-flow/processors/2/route/0/processors/0; event: 38de9c30-49eb-11ef-8a9e-02cfc6727565] error.notification.details: { "correlationId" : "38de9c30-49eb-11ef-8a9e-02cfc6727565", "message" : "Error Details", "priority" : "ERROR", }

The above 2 blocks of data come in as one event, but I want them to be 2 events, each starting from the keyword "ERROR". Below is my props.conf entry, which is not working:

[applog_test]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true
BREAK_ONLY_BEFORE = date
SHOULD_LINEMERGE = true
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TIME_PREFIX = ERROR\s+

Please help me fix this. Thanks in advance!

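A possible fix, as a sketch: disable line merging so that LINE_BREAKER alone controls the event boundaries, and break only where a newline is followed by `ERROR` plus a timestamp.

```
# props.conf
[applog_test]
SHOULD_LINEMERGE = false
# break before each "ERROR <date>" header; the lookahead leaves the keyword in the event
LINE_BREAKER = ([\r\n]+)(?=ERROR\s+\d{4}-\d{2}-\d{2})
TIME_PREFIX = ^ERROR\s+
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true
```

With SHOULD_LINEMERGE set to false, BREAK_ONLY_BEFORE no longer applies, which is why it has been removed from this sketch.
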