All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, we used to register devices on our Splunk Mobile instance through Splunk Cloud Gateway, but after upgrading Splunk to version 8.1 we have to register devices through Splunk Secure Gateway, since Splunk Cloud Gateway is no longer valid on the new version. Now, however, I am unable to register my device on the Splunk Mobile instance using Splunk Secure Gateway: I get Error 503 while registering. I have also copied the data from Cloud Gateway to Secure Gateway. Please feel free to provide your input on this problem.
Hello Experts, we are trying to integrate SailPoint with Splunk. We used the required add-on and entered all the necessary API information; however, we are getting a certificate error that stops the integration completely. Below is a sample of the error logs from the SailPoint integration:

  File "/data/splunk/etc/apps/Splunk_TA_sailpoint/bin/splunk_ta_sailpoint/aob_py3/requests/api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "/data/splunk/etc/apps/Splunk_TA_sailpoint/bin/splunk_ta_sailpoint/aob_py3/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/data/splunk/etc/apps/Splunk_TA_sailpoint/bin/splunk_ta_sailpoint/aob_py3/requests/sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "/data/splunk/etc/apps/Splunk_TA_sailpoint/bin/splunk_ta_sailpoint/aob_py3/requests/adapters.py", line 514, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='#hostname', port=8443): Max retries exceeded with url: /identityiq/oauth2/token (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1106)')))

Can someone please provide some input on this so that we can proceed with the integration? Thanks in advance.
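The traceback shows Python's `requests` library rejecting a self-signed certificate on the SailPoint host. As a first diagnostic step, it can help to inspect the certificate chain the server actually presents (the hostname and port below are placeholders taken from the error message):

```
openssl s_client -connect <sailpoint-host>:8443 -showcerts </dev/null
```

If the chain shown is self-signed or incomplete, the usual fixes are to make the signing CA available to the certificate store the add-on uses, or to replace the SailPoint certificate with one signed by a CA that the Splunk host already trusts.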
11-09-2021 07:21:11.662 +0000 ERROR ExecProcessor [19962 ExecProcessor] - Invalid user admin, provided in passAuth argument, attempted to execute command /opt/splunk/bin/python3.7 /opt/splunk/etc/app... See more...
11-09-2021 07:21:11.662 +0000 ERROR ExecProcessor [19962 ExecProcessor] - Invalid user admin, provided in passAuth argument, attempted to execute command /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/python_upgrade_readiness_app/bin/pura_send_email.py
Hello all, I am trying to extract the fields highlighted below, but the extraction sometimes fails to get the required values. Can you please help me get this working?

1) 537654 High 2021.11.10 10:53:50 RDS_Failure_notification01 prd-Server2 127.0.0.1 sns.event EventSource : db-instance IdentifierLink : https://console.aws.amazon.com SourceId : prd-Server2 EventId : http://docs.aws.amazon.com EventMessage : DB instance restarted TopicArn : arn:aws:sns:ap-northeast-1:123456789:Lambda-PRD-Server1-SSS
2) 536465 High 2021.11.09 23:07:33 Server just booted [prd-Server1] prd-Server1 127.0.0.1 Server Status 00:04:44
3) 536438 High 2021.11.09 23:01:02 App Proxy: Utilization of unreachable poller processes over 80% prd-Server3 127.0.0.1 Utilization of unreachable poller data collector processes, in % 100 %
4) 448232 Average 2021.11.09 09:56:02 App Proxy: Utilization of unreachable poller processes over 70% prd-Server4 127.0.0.1 Utilization of unreachable poller data collector processes, in % 100 %

BOLD - Field1; Underlined - Field2; Strikethrough - Field3

@ITWhisperer @javiergn @richgalloway Please have a look at this. Thank you
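Since the bold/underline/strikethrough formatting did not survive the post, here is a sketch of a `rex` extraction for the leading fields that are consistent across all four samples; the field names `id`, `severity`, `event_time`, `description`, and `host_name` are placeholders of my own, not the original field names:

```
| rex field=_raw "^(?<id>\d+)\s+(?<severity>\w+)\s+(?<event_time>\d{4}\.\d{2}\.\d{2} \d{2}:\d{2}:\d{2})\s+(?<description>.+?)\s+(?<host_name>\S+)\s+127\.0\.0\.1"
```

The lazy `.+?` on `description` stops at the last token before the host name; if the real events vary more than these four samples, the pattern will need tightening.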
Hey there, below I have a field where ABC > 2500 (the value is actually 2800). If ABC > 2500, I want to add 1 day to the Human_readable field; I have already created the logic for adding 1 day to Human_readable. The question now is how I can write the logic for it as a nested condition. This is the logic I have thus far (with the missing false branch and closing parenthesis added):

| eval Then_Set=if(ABC>2500, strftime(strptime(Human_readable,"%B %d, %Y") + 86400, "%B %d, %Y"), Human_readable)

This is what I have so far:

| makeresults
| eval ABC="2800", DEF="3", GHI="5"
| eval rel_Time="11102021"
| eval Epoch_Time=strptime(rel_Time,"%m%d%Y")
| eval Human_readable=strftime(Epoch_Time, "%B %d, %Y")
| eval Service=if(ABC>2500, "Send Alert", "No Alert")
| eval Add_1Day=strftime(strptime(Human_readable,"%B %d, %Y") + 86400, "%B %d, %Y")
| eval Then_Set=if(ABC>2500, strftime(strptime(Human_readable,"%B %d, %Y") + 86400, "%B %d, %Y"), Human_readable)
| table Service Epoch_Time Human_readable Add_1Day Then_Set
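For extending this to several thresholds, `case()` usually reads better than nesting `if()` calls (note that eval function names should be lowercase: `strptime`, not `strpTime`). A sketch, where the 5000 threshold and the 2-day bump are hypothetical examples, not values from the original question:

```
| eval Then_Set=case(
    ABC>5000, strftime(strptime(Human_readable,"%B %d, %Y") + 2*86400, "%B %d, %Y"),
    ABC>2500, strftime(strptime(Human_readable,"%B %d, %Y") + 86400, "%B %d, %Y"),
    true(), Human_readable)
```

`case()` evaluates its condition/value pairs in order and returns the value for the first condition that is true, with `true()` acting as the fallback branch.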
I have Splunk 7.3.6 with ES 6.0.2 on an on-prem Linux VM, and an EC2 instance already set up with Splunk Core 8.1.5 where I want to migrate the ES app. I have been looking at various docs such as "Migrate from standalone search heads" and "How to migrate". The first doc is more about migrating from a standalone search head to an SHC, and it suggests migrating only the /etc/apps and /etc/users directories. The second doc, which seems more closely relevant to what I want to achieve, states that I should first copy the entire $SPLUNK_HOME directory to the new system and then install Splunk on top of that. I am not sure which one to follow. Also, in the case of the second doc, I have done the opposite: I installed Splunk first and am now looking to copy the existing ES SH's $SPLUNK_HOME on top of that, but I don't know if that would work. Any suggestions, ideas, or thoughts?
Hi all, I recently upgraded a Splunk HF from 7.3.x to 8.1.2, and also the Cisco eStreamer (Encore) app from 3.6.x to 4.8.1. Both upgrades went fine, and cisco:estreamer:data logs were coming in fine until about 1.5 hours post-upgrade, after which logs stopped coming in. The file estreamer.log in /opt/splunk/etc/apps/TA-eStreamer/bin/encore doesn't show any ERROR (INFO Running. 3500 handled; average rate 4.86 ev/sec;). Also, I'm able to see logs populating in /opt/splunk/etc/apps/TA-eStreamer/data. However, it appears logs are not getting updated in the cisco:estreamer:data sourcetype. Other log sources relayed from the HF to the cloud do not have any issues, which rules out network connectivity problems between the HF and Splunk Cloud. Has anyone else seen similar issues?
Hi SMEs, greetings. I am seeking help to configure Splunk to start at boot while SELinux is in enforcing mode. We are running the latest version, 8.2.0. Many thanks in advance.
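A sketch of the usual steps, assuming a systemd-based distribution with Splunk installed under /opt/splunk and running as user `splunk` (adjust the path and user to your environment):

```
# Register a systemd unit for Splunk (supported since 7.2.2)
/opt/splunk/bin/splunk enable boot-start -user splunk -systemd-managed 1

# Restore the expected SELinux contexts on the installation directory
restorecon -Rv /opt/splunk

# After a test reboot, check for SELinux denials involving Splunk
ausearch -m avc -ts recent
```

If `ausearch` shows AVC denials at boot, those identify exactly which SELinux policy adjustments are needed.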
I have a stacked bar chart. The user wanted dark colors, which I set using the code at the bottom. However, the labels barely show on the bars, so I would like to change the font to white. I have googled with no luck and tried a few option changes with no luck:

<option name=“charting.fontColor”>“#FFFFFF”</option>
<option name=“charting.backgroundColor”>{“#FFFFFF”}</option>
<option name=“charting.backgroundColor”>#FFFFFF</option>
<option name=“charting.fieldColors”>{“% Achievement”: #407294, “% Misses”: #A7090F}</option>

Please tell me there is an easy solution for what seems to be a simple fix. Thanks!
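One thing worth checking: the options above use curly typographic quotes (“ ”), which Simple XML will not parse; the attribute and JSON values need straight ASCII quotes, and `charting.fieldColors` expects quoted hex strings. A sketch with straight quotes, reusing the hex colors from the question:

```
<option name="charting.fontColor">#FFFFFF</option>
<option name="charting.fieldColors">{"% Achievement": "#407294", "% Misses": "#A7090F"}</option>
```

Whether `charting.fontColor` also restyles the data labels drawn on the bars varies by chart type and Splunk version, so treat this as a starting point rather than a guaranteed fix.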
I've followed this guide to install SC4S and connect it with Splunk: https://splunk.github.io/splunk-connect-for-syslog/main/gettingstarted/byoe-rhel8/

I am getting this error:

2021 Nov 11 00:56:11 sc4s-hostname01 curl: error sending HTTP request; url='https://10.0.0.1:8088/services/collector/event', error='Couldn\'t connect to server', worker_index='0', driver='d_hec_fmt#0', location='root generator dest_hec:5:5'
2021 Nov 11 00:56:11 sc4s-hostname01 Server disconnected while preparing messages for sending, trying again; driver='d_hec_fmt#0', location='root generator dest_hec:5:5', worker_index='0', time_reopen='10', batch_size='1469'

The network connection and token are OK:

curl -k https://10.0.0.1:8088/services/collector/event -H "Authorization: Splunk <token>" -d '{"event": "hello world"}'
{"text":"Success","code":0}
I hope you can help me with a dashboard line visualization I'm trying to make. Here is an example of our logs, which keep counts at the end of each line:

[db]: 00:05:01.000: newcoteachers:1d 115
[db]: 00:05:01.000: newcoteachers:7d 528
[db]: 00:05:01.000: newcoteachers:30d 1884

How can I chart a three-line graph in one Splunk dashboard panel to represent these numbers? I feel like I'm close, but I've hit a wall and cannot find any documentation to help. The query below only returns the "1d" type. Is it possible to chart all three types?

rex field=_raw "newteachers:(?<type>.*) (?<num>.*)"
| chart last(num) by type

Thanks for any help,
Christian
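A sketch of one possible fix, assuming the marker in the raw data really is `newcoteachers:` (the rex in the question searches for `newteachers:`, which does not match the sample lines) and that the goal is one line per type over time:

```
| rex field=_raw "newcoteachers:(?<type>\S+)\s+(?<num>\d+)"
| timechart span=5m last(num) by type
```

Using `\S+` and `\d+` instead of greedy `.*` keeps the two captures from swallowing each other, and `timechart ... by type` draws one series each for 1d, 7d, and 30d; the `span=5m` is an assumption to match the five-minute cadence visible in the timestamps.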
Hi all, I need guidance on how to approach this. I need help creating an alert that triggers at different times. The alert will trigger if:
- Y-email was sent over 1 day ago
- Z-email was sent over 2 days ago
- M-email was sent over 3 days ago
All these triggers will be part of one email. Can this be done with a cron schedule alone, or will the times need to be hard-coded in the search itself? Or will I need separate alerts?
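This can usually live in a single scheduled search rather than three separate alerts: the cron schedule only controls how often the check runs, while the per-email age logic goes in SPL. A sketch, where `email_type` is a hypothetical field distinguishing the three emails (substitute whatever field your events actually carry):

```
| eval age_days=(now()-_time)/86400
| where (email_type="Y" AND age_days>1)
     OR (email_type="Z" AND age_days>2)
     OR (email_type="M" AND age_days>3)
```

With this shape, one alert fires whenever any of the three conditions holds, and all matching rows land in the same notification email.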
Hello, I need some recommendations on how to extract a limited amount of _raw data based on some search criteria. My requirements and one sample _raw event are given below. Any help will be highly appreciated. Thank you!

My requirements: the following is one sample event. I have a key search string, "Operation Succeeded" (see the second line of the event from the last), and my objectives for this search are:
1. Get all events that contain "Operation Succeeded".
2. Display only the one line that contains "Operation Succeeded". For example, the search will display only "----AUDIT-1044-036936275288 -- 2021/10/05 08:58:24.289 Operation Succeeded" (ignoring the rest) for that event, and likewise for all other events that match "Operation Succeeded".

Sample data:
--AUDIT-1044-036936275170 -- 2021/10/05 08:58:24.289 Attempting to set option 'auditing'
----AUDIT-1044-036936275196 -- 2021/10/05 08:58:24.289 Checking SET ANY OPTION system privilege or authority - OK
----AUDIT-1044-036936275242 -- 2021/10/05 08:58:24.289 Checking SET ANY SECURITY OPTION system privilege or authority - OK
----AUDIT-1044-036936275288 -- 2021/10/05 08:58:24.289 Operation Succeeded
----AUDIT-1044-036936275305 -- 2021/10/05 08:58:24.289 Auditing Disabled
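A sketch of one way to do both steps: filter to events containing the phrase, then pull out just that one audit line with `rex` (the field name `succeeded_line` is a placeholder of my own):

```
"Operation Succeeded"
| rex field=_raw "(?<succeeded_line>-+AUDIT-\d+-\d+ -- [^\r\n]+Operation Succeeded)"
| table succeeded_line
```

The `[^\r\n]+` keeps the match confined to a single line of the multi-line event, so only the line that actually ends in "Operation Succeeded" is captured.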
I'm looking to have Cisco Firepower App for Splunk populated with Any Connect VPN users. I would like to have the "Device Overview" dashboard populate the information.  
As the title says, I installed the PUR app on ES and non-ES SHs. The app ran and returned results on the non-ES SH but not on the ES SH. Can somebody please explain what the potential reason behind this might be, or how I can fix it?
My container starts behind nginx (web SSL deactivated), but then fails and restarts every minute:

FAILED - RETRYING: Test basic https endpoint (60 retries left).

Since my nginx routes www.mysplunkserver.com:443/80 to the container, :8000 is not routed for now. Is there a way to deactivate the basic https endpoint test?

[settings]
enableSplunkWebSSL = 0
httpport = 8000
tools.proxy.on = true
I recently performed a data migration to correct some mistakes made by the person who built our environment. Afterward, I found I had to run `splunk fsck repair` due to errors that are preventing Splunk from starting. After running the command with "--all-buckets-all-indexes" or "--all-buckets-one-index --index-name=linux", it stops without seeming to do anything. After it stops I get, as an example:

Process delayed by 56.174 seconds, perhaps system was suspended? Stopping WatchdogThread.

We have four indexers in a cluster. I've put the cluster master in maintenance mode and stopped Splunk on all of the indexers. I'm running the command on a single indexer since the data is shared via NFS. One thing I haven't done is unmount the share on all of the other indexers. What is the cause of this error, and what do I need to do to move past it?
I recently had to realign our storage: specifically, write cold data to one NFS share and hot/warm to another. Prior to this, all data was being written to the same storage, which was not per our design. I placed our cluster master in maintenance mode, stopped Splunk on all indexers, then used rsync to copy data to the proper shares. After moving data around and ensuring that the NFS shares were mounted in the proper locations, I attempted to bring everything back online. The cluster master starts fine. The indexers, though, do not. I have only been able to start one indexer out of four, and it does not seem to be one specific indexer: I had Splunk running on indexer1, but indexer2, indexer3, and indexer4 then failed; later, I was able to start Splunk on indexer2, but indexer1, indexer3, and indexer4 failed. Examples of the errors I'm seeing are:

ERROR STMgr - dir='/splunk/audit/db/hot_v1_64' st_open failure: opts=1 tsidxWritingLevel=1 (No such file or directory)
ERROR StreamGroup - Failed to open THING for dir=/splunk/audit/db/hot_v1_64 exists=false isDir=false isRW=false errno='No such file or directory' Your .tsidx files will be incomplete for this bucket, and you may have to rebuild it.
ERROR StreamGroup - failed to add corrupt marker to dir=/splunk/audit/db/hot_v1_64 errno=No such file or directory

and

ERROR HotDBManager - Could not service the bucket: path=/splunk/_introspection/db/hot_v1_388/rawdata not found. Remove it from host bucket list.
WARN  TimeInvertedIndex - Directory /splunk/_introspection/db/hot_v1_388 appears to have been deleted
FATAL MetaData - Unable to open tempfile=/splunk/_introspection/db/hot_v1_388/Strings.data.temp for reason="No such file or directory"; this=MetaData: {file=/splunk/_introspection/db/hot_v1_388/Strings.data description=Strings totalCount=761 secsSinceFullService=0 global=WordPositionData: { count=0 ET=n/a LT=n/a mostRecent=n/a }

and

FATAL HotDBManager - Hot bucket with id=389 already exists. idx=_introspection dir=/splunk/_introspection/db/hot_v1_389

I've run 'splunk fsck repair --all-buckets-all-indexes' more than once, but these issues persist. Can the underlying issues be corrected, or should we cut our losses and start our collections fresh? Fortunately, that is an option we can use as a last resort.
Hi, I am looking for a solution to check Splunk query results: if a query returns 0 events, I need to trigger an alert. Please provide a query that checks when the count value is zero. Thanks.
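A common pattern: append `stats count` so the search always returns exactly one row, then keep that row only when the count is zero (`your_search` below is a placeholder for the actual base search):

```
your_search
| stats count
| where count=0
```

Set the alert to trigger when the number of results is greater than zero; alternatively, drop the `where` clause and use a custom trigger condition that checks `count == 0`.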
This I know is a stupid question, but here it goes anyway, hoping someone has solved this problem in the past. Does anyone know how to undo changes to a lookup after accidentally using | outputlookup? I accidentally overwrote and committed changes to my lookup and have been trying to find a way to revert the changes. Please help, anyone...