All Topics

I've followed this guide to install SC4S and connect it to Splunk: https://splunk.github.io/splunk-connect-for-syslog/main/gettingstarted/byoe-rhel8/

And I am getting this error:

2021 Nov 11 00:56:11 sc4s-hostname01 curl: error sending HTTP request; url='https://10.0.0.1:8088/services/collector/event', error='Couldn\'t connect to server', worker_index='0', driver='d_hec_fmt#0', location='root generator dest_hec:5:5'
2021 Nov 11 00:56:11 sc4s-hostname01 Server disconnected while preparing messages for sending, trying again; driver='d_hec_fmt#0', location='root generator dest_hec:5:5', worker_index='0', time_reopen='10', batch_size='1469'

The network connection and token are OK:

curl -k https://10.0.0.1:8088/services/collector/event -H "Authorization: Splunk <token>" -d '{"event": "hello world"}'
{"text":"Success","code":0}
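One thing worth checking is the SC4S HEC destination settings. The sketch below shows the env_file variables that control the HEC URL and token in recent SC4S releases; the variable names should be verified against your installed version, and the URL/token values are placeholders:

```
# /opt/sc4s/env_file  (values are placeholders)
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://10.0.0.1:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=<token>
# if Splunk presents a self-signed certificate, TLS verification must be disabled
SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
```

If curl succeeds from the host but SC4S cannot connect, it may also point at container networking (the SC4S container being unable to reach 10.0.0.1) rather than the token itself.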
I hope you can help me with a dashboard line visualization I'm trying to make. Here is an example of our logs, which keep a count at the end of each line:

[db]: 00:05:01.000: newcoteachers:1d 115
[db]: 00:05:01.000: newcoteachers:7d 528
[db]: 00:05:01.000: newcoteachers:30d 1884

How can I chart a graph with three lines in one Splunk dashboard panel to represent these numbers? I feel like I'm close, but I've hit a wall and cannot find any documentation to help. The query below only returns the "1d" type. Is it possible to chart all three types?

rex field=_raw "newteachers:(?<type>.*) (?<num>.*)"
| chart last(num) by type

Thanks for any help.

Christian
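A sketch of one approach, assuming the events sit in a single index/sourcetype (those names are placeholders). Note that the sample logs spell the token `newcoteachers`, so the pattern below matches that literal, and the captures are tightened to non-whitespace/digits so the greedy `.*` does not swallow the count:

```
index=your_index sourcetype=your_sourcetype "newcoteachers"
| rex field=_raw "newcoteachers:(?<type>\S+)\s+(?<num>\d+)"
| timechart span=5m last(num) by type
```

`timechart ... by type` draws one series per extracted type, so the `1d`, `7d`, and `30d` counts appear as three lines in a single panel.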
Hi All, I need guidance on how to approach this. I need help with creating an alert that triggers at different times. The alert will trigger if:

- Y-email was sent over 1 day ago
- Z-email was sent over 2 days ago
- M-email was sent over 3 days ago

All these triggers will be part of one email. Can this be done with a cron schedule alone, or will the times need to be hard-coded in the search itself? Or will I need separate alerts?
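Assuming each send is an indexed event carrying an email-type field (the index, sourcetype, and field names below are placeholders), a single cron-scheduled search can evaluate all three thresholds at once, so one alert may suffice:

```
index=mail sourcetype=email_events
| stats latest(_time) as last_sent by email_type
| eval age_days=(now()-last_sent)/86400
| eval threshold=case(email_type=="Y",1, email_type=="Z",2, email_type=="M",3)
| where age_days > threshold
```

Any row surviving the `where` is an overdue email type, so the alert condition is simply "number of results > 0", and all overdue types land in the same notification email.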
Hello, I need some recommendations on how to extract a limited amount of _raw data based on some search criteria. My requirements and one sample _raw event are given below. Any help will be highly appreciated. Thank you!

My requirements: below is one sample event. I have a key search string, "Operation Succeeded" (see the second line of the event from the last), and my objectives for this search are:

1. Get all events that contain "Operation Succeeded".
2. Display only the line that contains "Operation Succeeded". For example, the search will display only "----AUDIT-1044-036936275288 -- 2021/10/05 08:58:24.289 Operation Succeeded" (ignoring the rest) for that event, and the same for the rest of the events that match "Operation Succeeded".

Sample data:

--AUDIT-1044-036936275170 -- 2021/10/05 08:58:24.289 Attempting to set option 'auditing'
----AUDIT-1044-036936275196 -- 2021/10/05 08:58:24.289 Checking SET ANY OPTION system privilege or authority - OK
----AUDIT-1044-036936275242 -- 2021/10/05 08:58:24.289 Checking SET ANY SECURITY OPTION system privilege or authority - OK
----AUDIT-1044-036936275288 -- 2021/10/05 08:58:24.289 Operation Succeeded
----AUDIT-1044-036936275305 -- 2021/10/05 08:58:24.289 Auditing Disabled
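A sketch that covers both objectives, assuming the events are multiline and the index name is a placeholder: the base search keeps only matching events, and the rex pulls out just the AUDIT line that contains the match:

```
index=your_index "Operation Succeeded"
| rex field=_raw "(?<audit_line>-+AUDIT-\d+-\d+ -- \S+ \S+ Operation Succeeded)"
| table audit_line
```

Against the sample data this would display only the `----AUDIT-1044-036936275288 -- 2021/10/05 08:58:24.289 Operation Succeeded` line for each matching event.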
I'm looking to have the Cisco Firepower App for Splunk populated with AnyConnect VPN users. I would like the "Device Overview" dashboard to show this information.
As the title says, I installed the PUR app on ES and non-ES search heads. The app ran and returned results on the non-ES SH but not on the ES SH. Can somebody please explain the potential reason behind this, or how I can fix it?
My container starts behind nginx (web SSL deactivated), but then fails and restarts every minute:

FAILED - RETRYING: Test basic https endpoint (60 retries left).

Since my nginx routes www.mysplunkserver.com:443/80 to the container, :8000 is not routed for now. Is there a way to deactivate the basic https endpoint test?

[settings]
enableSplunkWebSSL = 0
httpport = 8000
tools.proxy.on = true
I recently performed a data migration to correct some mistakes made by the person who built our environment. Afterward, I found I had to run `splunk fsck repair` due to errors that are preventing Splunk from starting. After running the command with "--all-buckets-all-indexes" or "--all-buckets-one-index --index-name=linux", it stops without seeming to do anything. After it stops I get, for example:

Process delayed by 56.174 seconds, perhaps system was suspended?
Stopping WatchdogThread.

We have four indexers in a cluster. I've put the cluster master in maintenance mode and stopped Splunk on all of the indexers. I'm running the command on a single indexer since the data is shared via NFS. One thing I haven't done is unmount the share on all of the other indexers. What is the cause of this error, and what do I need to do to move past it?
I recently had to realign our storage: specifically, write cold data to one NFS share and hot/warm to another. Prior to this, all data was being written to the same storage, which was not per our design. I placed our cluster master in maintenance mode, stopped Splunk on all indexers, then used rsync to copy data to the proper shares. After moving data around and ensuring that the NFS shares were mounted in the proper locations, I attempted to bring everything back online. The cluster master starts fine. The indexers, though, do not. I have only been able to start one indexer out of four, and it does not seem to be one specific indexer: I had Splunk running on indexer1, but indexer2, indexer3, and indexer4 then failed. Later, I was able to start Splunk on indexer2, but indexer1, indexer3, and indexer4 failed. Examples of the errors I'm seeing:

ERROR STMgr - dir='/splunk/audit/db/hot_v1_64' st_open failure: opts=1 tsidxWritingLevel=1 (No such file or directory)
ERROR StreamGroup - Failed to open THING for dir=/splunk/audit/db/hot_v1_64 exists=false isDir=false isRW=false errno='No such file or directory' Your .tsidx files will be incomplete for this bucket, and you may have to rebuild it.
ERROR StreamGroup - failed to add corrupt marker to dir=/splunk/audit/db/hot_v1_64 errno=No such file or directory

and:

ERROR HotDBManager - Could not service the bucket: path=/splunk/_introspection/db/hot_v1_388/rawdata not found. Remove it from host bucket list.
WARN  TimeInvertedIndex - Directory /splunk/_introspection/db/hot_v1_388 appears to have been deleted
FATAL MetaData - Unable to open tempfile=/splunk/_introspection/db/hot_v1_388/Strings.data.temp for reason="No such file or directory"; this=MetaData: {file=/splunk/_introspection/db/hot_v1_388/Strings.data description=Strings totalCount=761 secsSinceFullService=0 global=WordPositionData: { count=0 ET=n/a LT=n/a mostRecent=n/a }

and:

FATAL HotDBManager - Hot bucket with id=389 already exists. idx=_introspection dir=/splunk/_introspection/db/hot_v1_389

I've run 'splunk fsck repair --all-buckets-all-indexes' more than once, but these issues persist. Can the underlying issues be corrected, or should we cut our losses and start our collections fresh? Fortunately, that is an option we can use as a last resort.
Hi, I am looking for a way to check a Splunk query's results: if it returns 0 events, I need to trigger an alert. Please provide a query that checks for when the count is zero. Thanks.
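One common pattern is to make the search emit exactly one row when nothing matched, then set the alert to trigger on "number of results > 0". A sketch, with the index and sourcetype as placeholders:

```
index=your_index sourcetype=your_sourcetype
| stats count
| where count=0
```

`stats count` always returns a single row, even over an empty result set, so `where count=0` passes that row through precisely when the base search found no events.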
This is a stupid question, I know, but here it goes anyway, hoping someone has solved this problem in the past. Does anyone know how to undo the changes to a lookup after accidentally using | outputlookup? I accidentally overwrote and committed changes to my lookup and have been trying to find a way to revert them. Please help, anyone...
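As far as I know there is no built-in undo for outputlookup: unless a file-system backup or a version-controlled copy of the CSV exists, the previous contents are gone. A preventive habit worth adopting is snapshotting the lookup before any overwrite (the lookup names here are placeholders):

```
| inputlookup mylookup.csv
| outputlookup mylookup_backup.csv
```

Running this first gives you a copy to restore from with the reverse pipeline if the subsequent write goes wrong.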
I want to find items in one index based on results from another index's search. I have the following, but only get a handful of results for some reason.

index=a sourcetype=test
| join id [search index b | rename id as idb]
| stats count by id, idb

Is this the best way to accomplish this, and is there any reason I only get a small number of results?
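`join` runs its subsearch under the subsearch limits (row cap and execution timeout), which silently truncates results and is a frequent cause of "only a handful" coming back; also note the subsearch needs `index=b`, not `index b`. A sketch of a join-free alternative, assuming both indexes carry the same id value:

```
(index=a sourcetype=test) OR (index=b)
| stats count values(index) as indexes by id
| where mvcount(indexes)=2
```

Searching both indexes in one pass and grouping by `id` avoids the subsearch limits entirely; the final `where` keeps only ids that appear in both indexes.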
I have a user who has asked how to get access/permissions to the "export" button when doing a search in Splunk. It is not showing up for them when they run a search. The first pic has that option highlighted. Below is a screenshot of a search they ran, without the export/download option. Is this controlled via a role or some other setting?
Hi, I need to join two searches. For example:

Example 1:

| inputlookup join_example1.csv

country  product  day         stock
Spain    apples   10/10/2022  25
France   apples   10/10/2022  22
Spain    grapes   10/10/2022  30
France   grapes   10/10/2022  28
Spain    apples   10/10/2021  25
France   apples   10/10/2021  22
Spain    grapes   10/10/2021  30
France   grapes   10/10/2021  28

Example 2:

| inputlookup join_example2.csv

day         product  requested
10/10/2022  apples   90
10/10/2021  apples   110
10/10/2022  grapes   100
10/10/2021  grapes   110

If I join both searches:

| inputlookup join_example1.csv
| join product, day [| inputlookup join_example2.csv]
| table product day country stock requested

the result is:

product  day         country  stock  requested
apples   10/10/2022  Spain    25     90
apples   10/10/2022  France   22     90
grapes   10/10/2022  Spain    30     100
grapes   10/10/2022  France   28     100
apples   10/10/2021  Spain    25     110
apples   10/10/2021  France   22     110
grapes   10/10/2021  Spain    30     110
grapes   10/10/2021  France   28     110

But I need the subsearch to merge only with the first result, like this (only one country):

product  day         country  stock  requested
apples   10/10/2022  Spain    25     90
apples   10/10/2022  France   22     0
grapes   10/10/2022  Spain    30     100
grapes   10/10/2022  France   28     0
apples   10/10/2021  Spain    25     110
apples   10/10/2021  France   22     0
grapes   10/10/2021  Spain    30     110
grapes   10/10/2021  France   28     0

That is only an example; I need each subsearch result merged only once. Does anyone know a solution for this? Thanks!
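One sketch that keeps the join as-is but blanks the requested value on every row after the first within each product/day group, reproducing the desired output:

```
| inputlookup join_example1.csv
| join product, day [| inputlookup join_example2.csv]
| streamstats count as row_in_group by product, day
| eval requested=if(row_in_group=1, requested, 0)
| fields - row_in_group
| table product day country stock requested
```

`streamstats` numbers the rows in arrival order inside each product/day group, so only the first country keeps the joined value and the rest get 0.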
My default timezone is EST. How do I change it so that when other users are using my dashboards, they can view it in UTC or a different time zone? In other words, how do I display my results in a different time zone, or add an offset?
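Splunk already renders _time in each viewer's own time zone, set per user under Account Settings, so other users can simply change that preference. To pin a dashboard column to UTC regardless of who is viewing, one sketch is to cancel out the viewer's offset before formatting (this reads the offset from strftime's `%z`, e.g. "-0500", and ignores half-hour zones):

```
| eval tz_offset=tonumber(substr(strftime(_time,"%z"),1,3))*3600
| eval time_utc=strftime(_time - tz_offset, "%Y-%m-%d %H:%M:%S")
```

Replacing `- tz_offset` with any fixed number of seconds gives an arbitrary offset instead of UTC.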
Hello all, I'm not sure whether what I've been asked to do is achievable; I'm hoping that someone can advise. We have a Windows 2003 server that cannot have a UF installed, as it is not compatible with our current environment (8.1.6). That aside, I have managed to ingest data using 'open' shares from a UF on a Windows 2016 server to the 2003 server. I now have a request to ingest data from a restricted share on the 2003 server. I have tried setting up a share from the 2016 server to the 2003 server, but this does not work; I guess because the UF is not using the same account the share was set up under? Can anyone tell me how I can create a share for the Splunk UF to use? Thanks
This has been asked a million times. I've been digging through the various postings but haven't figured out what I'm doing wrong. I'm able to do a search-time extraction using the rex command to get a field exactly the way I want it. But when I try to add it to the field extractor, it includes too much information. I need to extract the LINK_TARGET value from the event below, but the USER details are also being included in the field extractor setup. Hopefully my redactions don't make this impossible for gurus to assist.

Search command:

index="index" search_term
| rex field=_raw "LINK_TARGET\s:\s(?<link_target>.*)\n"

Data:

2021-11-10 16:03:14.631 INFO [blah] [Country=US] [User=user] [ip] [DefaultLynxMetricsLogger] [blah] [blah] Metrics logging start: key blah_SEARCH_ORIGIN
LINK_TARGET : https://www.blah.com/en_US/blah?utm_source=copy&utm_medium=blah&utm_campaign=blah
USER : 9999999
Metrics logging end
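The usual culprit is the greedy `.*`: since the URL contains no whitespace, constraining the capture to non-whitespace stops the match before the USER line without relying on `\n`. A props.conf sketch for a permanent search-time extraction (the sourcetype name is a placeholder):

```
# props.conf on the search head (search-time field extraction)
[your_sourcetype]
EXTRACT-link_target = LINK_TARGET\s:\s(?<link_target>\S+)
```

The same `\S+` pattern can be pasted into the field extractor UI's regex mode in place of its generated expression.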
@sideview Hi Nick, I am using a join with mstats, but I am hoping that I don't have to. However, I can't crack it; any help would be amazing. Below is the current SPL:

| mstats min("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid service.type service.name replica.name
| rename "service.name" as service_name
| rename "replica.name" as replica_name
| rename "service.type" as service_type
| eval Process_Name=((service_name . " # ") . replica_name)
| sort 0 - _time
| dedup _time pid
| join type=left Process_Name _time
    [| mstats min("mx.replica.status") as Replica WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY "service.name" replica.name service.type
    | rename "service.name" as service_name
    | rename "replica.name" as replica_name
    | eval Process_Name=((service_name . " # ") . replica_name)
    | table Process_Name, Replica, "service.type", _time
    | sort 0 - _time
    | dedup _time Process_Name]
| table _time Process_Name Replica cpuPerc service_type

I have tried to make it one mstats, but that will not work: in this case min("mx.replica.status") as Replica has no pid, so Splunk gives me back blanks for that field. So do I have to use a join?

| mstats min("mx.replica.status") as Replica min("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid service.type service.name replica.name
| rename "service.name" as service_name
| rename "replica.name" as replica_name
| rename "service.type" as service_type
| eval Process_Name=((service_name . " # ") . replica_name)
| sort 0 - _time
| dedup _time pid
| table _time Process_Name Replica cpuPerc service_type
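A join-free sketch: run the two mstats searches separately, append the second to the first, and then merge rows that share _time and Process_Name with stats. Metric and dimension names are taken from the question; the span and grouping should be verified against the data:

```
| mstats min("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid service.type service.name replica.name
| append
    [| mstats min("mx.replica.status") as Replica WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY service.type service.name replica.name]
| eval Process_Name='service.name' . " # " . 'replica.name'
| stats values(cpuPerc) as cpuPerc values(Replica) as Replica values("service.type") as service_type by _time Process_Name
```

Since `stats` collapses the two result sets on the shared keys, the missing pid in the replica-status rows no longer matters; it simply is not part of the grouping.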
I am currently using an input token called OS. I have three values for the token: MAC, Windows, and Linux. In my visualization I want to say:

If OS = Mac, then run this search.
If OS = Windows, then run this search.
If OS = Linux, then run this search.

I am aware that the eval command has decision logic built into it, but I don't think it can handle subsearches inside a case. Any help is appreciated. Thank you, Mark
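In Simple XML this is usually handled without eval: a change handler on the input sets a second token to a whole search string per selection, and the panel just runs that token. A sketch, where the three index names are placeholders for your actual searches:

```
<input type="dropdown" token="os">
  <label>OS</label>
  <choice value="mac">MAC</choice>
  <choice value="windows">Windows</choice>
  <choice value="linux">Linux</choice>
  <change>
    <condition value="mac"><set token="os_search">index=mac_logs</set></condition>
    <condition value="windows"><set token="os_search">index=win_logs</set></condition>
    <condition value="linux"><set token="os_search">index=linux_logs</set></condition>
  </change>
</input>
<!-- the panel's search then references the token -->
<search><query>$os_search$</query></search>
```

Each selection swaps in a completely different search, which sidesteps the limitation that case() cannot contain subsearches.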
Hello Team, I need help writing a query to check the CPU and memory utilization of pods in Splunk. Thank you.
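A sketch assuming the pod metrics are collected by Splunk Connect for Kubernetes into a metrics index; the index name and metric names below follow SCK's common defaults and should be verified against your environment, e.g. with `| mcatalog values(metric_name) WHERE index=k8s_metrics`:

```
| mstats avg(kube.pod.cpu.usage_rate) as cpu avg(kube.pod.memory.usage_bytes) as mem
    WHERE index=k8s_metrics span=5m BY pod_name
```

Feeding this into a timechart-style panel (or appending `| timechart` equivalents per pod) gives per-pod CPU and memory trends.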