All Topics



I am running ldapsearch in a scheduled report (which initially runs outputlookup) and getting the error message below. The ldapsearch returns 250 results and works properly when I run it manually.

05-26-2022 04:08:16.147 +0300 ERROR SearchMessages - orig_component="script" app="amdocscybermain" sid="scheduler__odeliab__amdocscybermain__RMD59063549f9a2aae97_at_1653527160_38496" message_key="EXTERN:SCRIPT_NONZERO_RETURN" message=External search command 'ldapsearch' returned error code 15.

app = amdocscybermain
component = SearchMessages
date_zone = 180
eventtype = err0r error
host = illinissplnksh01
index = _internal
log_level = ERROR
message = External search command 'ldapsearch' returned error code 15.
sid = scheduler__odeliab__amdocscybermain__RMD59063549f9a2aae97_at_1653527160_38496
source = /opt/splunk/var/log/splunk/search_messages.log
sourcetype = splunk_search_messages
Hi All, I set ignoreOlderThan = 10d and it worked as expected: files older than 10 days were not searched. Once I set the value to 30d, all files came in, which is also as expected. However, after I set it back to 10d, there was no difference; all files, including those older than 10 days, still came in. Is this expected? I have restarted both the UF and the server. Thanks.
Hi All, I have been using the Splunk Stream app for a long time, and suddenly some problems arose:

1. Netflow is not working and no data is indexed. Even after installing the new version of the Stream app (8.0.2), the configuration page does not load. When I click on "new stream\metadata stream", nothing loads, so I can't configure the app.
2. I see the alerts below. I have done everything suggested in the community, such as renaming the current mongo folder to old and running "splunk clean kvstore --local", but nothing worked for me.

Failed to start KV Store process. See mongod.log and splunkd.log for details.
KV Store changed status to failed.
KVStore process terminated.
KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.

Thank you.
Hello Splunkers! Can you help me understand how I can compare the last week of data against the last 3 hours in Splunk? Previously I compared the current week against the previous week using the timewrap command, but last week vs. 3 hours is creating confusion for me. Please provide a solution or suggestion. The screenshot below is from New Relic.
How do I generate an SSL certificate for the Event Service server? There is documentation on how to secure the Controller and EUM servers, but I didn't find documentation on how to generate the .CSR certificate for the Event Service. Our Event Service SSL certificate has expired, and I am not sure how it was created and imported into the keystore. Thank you. Ferhana
I created an AWS EC2 instance and installed Splunk Enterprise on it, and opened all rules for ports 8000 and 8089. I can open the Splunk GUI from India, but whenever my peer tries to open it from the US he gets the message "Server Error". Is this related to the EC2 security groups, or a password issue? No logs are being recorded in the internal data either. Could you please help us fix this?
Hi friends, I would just like to know if I need a different HEC token for every sourcetype? I couldn't find any documentation related to this. Thanks.
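For context on the question above: the HEC event endpoint accepts an explicit sourcetype field in each event's JSON payload, so a single token can carry events of several sourcetypes (a token can also be configured with a default sourcetype). A minimal sketch of building such payloads; the host, port, and token mentioned in the comments are placeholders, not values from this post:

```python
import json

def hec_payload(event, sourcetype):
    """Build one HEC event body that declares its own sourcetype."""
    return json.dumps({"event": event, "sourcetype": sourcetype})

# Two different sourcetypes sharing one (placeholder) token.
batch = "".join(
    hec_payload(e, st)
    for e, st in [("user login", "app:auth"), ("disk full", "app:sys")]
)
print(batch)
# In practice this body would be POSTed to
# https://<host>:8088/services/collector/event with the header
# "Authorization: Splunk <token>" (both placeholders).
```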
Hi Splunkers, is it possible to make dynamic token results based on a radio input and multiple link inputs that share the same token? I want to achieve a result like this: if I click value "1" on the radio button, link input 1 shows up and its default value is passed to the table; it should be "A". If I change the value from "1" to "2", link input 2 shows up and its default value updates the table; it should be "D". Below is my sample query. I appreciate your help on this. Thanks in advance!

<form>
  <label>Multi Link Input</label>
  <fieldset submitButton="false">
    <input type="radio" token="select_link" searchWhenChanged="true">
      <label>Select Link List</label>
      <choice value="1">1</choice>
      <choice value="2">2</choice>
      <change>
        <condition value="2">
          <set token="show_2">true</set>
          <unset token="show_1"></unset>
        </condition>
        <condition value="1">
          <set token="show_1">true</set>
          <unset token="show_2"></unset>
        </condition>
      </change>
      <default>1</default>
    </input>
    <input type="link" token="select" searchWhenChanged="true" depends="$show_1$">
      <label>1</label>
      <choice value="A">A</choice>
      <choice value="B">B</choice>
      <choice value="C">C</choice>
      <default>A</default>
    </input>
    <input type="link" token="select" searchWhenChanged="true" depends="$show_2$">
      <label>2</label>
      <choice value="D">D</choice>
      <choice value="E">E</choice>
      <choice value="F">F</choice>
      <default>D</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <search>
          <query>| makeresults | eval $select$ = "$select$"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>
Hi, Palo Alto is one of our largest log sources, and we have been ingesting many different types of PAN logs for years via the Splunk_TA_paloalto add-on. The firewalls send logs to a syslog server that also functions as a UF. On 04/14/22 we noticed that the pan:threat sourcetype started to grow in volume. It is roughly the same number of events, but the events are now on average 2x, 3x, up to 5x larger in bytes. I also noticed that some fields are receiving the wrong data. When I track this back, both issues started on 4/14. I have also determined that these larger logs are all coming from one HA pair, out of dozens. I am having a very tough time coming up with explanations for the growth, and with options to fix the issue on the Splunk side. Has anyone ever seen this, or do you have any recommendations on how I might resolve it?
I have a field called "Risk Type" that contains categorical data about the type of risk of an event. For example, one event might say "Type - Network", but another event with more than one risk type will say "Type - Network Type - USB Type - Data", where the three risk types are in a single value. What I want to do is extract each type as a separate value, so for event X there would be three entries, one per type. Ex:

Event X  Type - Network
Event X  Type - USB
Event X  Type - Data

I tried mvexpand, but it did not separate each type into multiple values. I also thought of using the rex command, but I do not know what the regular expression would be. How do I accomplish this?
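The extraction described above amounts to a global match of the pattern `Type - \w+`; in SPL the analogous approach would likely be rex with max_match=0 followed by mvexpand (hedged, untested). The matching logic can be checked in isolation:

```python
import re

combined = "Type - Network Type - USB Type - Data"
# Find every "Type - <word>" occurrence; re.findall here plays the role of a
# global match (the analogue of rex with max_match=0).
risk_types = re.findall(r"Type - \w+", combined)
print(risk_types)  # ['Type - Network', 'Type - USB', 'Type - Data']
```

The same pattern string should drop into a rex capture group unchanged, e.g. `(?<risk_type>Type - \w+)`, though that exact field name is an assumption.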
I've done this in the past, and it works to get data for today up to the latest 5-minute span, but I'm hoping to speed it up with tstats.

index="foo" sourcetype="foo" earliest=-0d@d latest=[|makeresults | eval snap=floor(now()/300)*300 | return $snap] | stats sum(b) as bytes ...

I tried this, but it doesn't work.

| tstats sum(stuff.b) as bytes from datamodel="mymodel.stuff" where index="foo" sourcetype="foo" earliest=-0d@d latest=[|makeresults | eval snap=floor(now()/300)*300 | return $snap] | ...

I could potentially do this, but it doesn't seem to be much better and, quite frankly, is a bit more confusing.

| tstats sum(stuff.b) as bytes from datamodel="mymodel.stuff" where index="foo" sourcetype="foo" earliest=-0d@d by _time span=1min | where _time < floor(now()/300)*300 | rename stuff.* as * | stats sum(bytes) as bytes ...

If there is any way to do it in the tstats command itself, that would be great. Thoughts?
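The snapping expression the subsearch computes, floor(now()/300)*300, can be sanity-checked in isolation; this sketch just mirrors that arithmetic outside Splunk (the function name is an illustration):

```python
def snap_to_5min(epoch):
    """Round a Unix timestamp down to the previous 5-minute boundary,
    mirroring the SPL expression floor(now()/300)*300."""
    return int(epoch // 300) * 300

# 1653527160 is 60 seconds past a boundary, so it snaps back by one minute.
print(snap_to_5min(1653527160))  # 1653527100
print(snap_to_5min(1653527100))  # 1653527100 (already on a boundary)
```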
Hi there, I am new to Splunk, so this question could be silly. We set up an alert to page the on-call team once the first log of the day with the keyword "down" is detected by Splunk. However, it is very chatty. I wonder if it is possible to make an alert like the one below.

1. If the daily scan finds multiple "down" messages in the past 24 hours, it only considers the most recent "down" message.
2. Splunk then searches over the following 7 days for any "up" messages.
3. Splunk only considers the most recent "up" message; as long as the timestamp of the "up" message is more recent than the "down" message, Splunk doesn't alert. Otherwise, it alerts the on-call team.

The most difficult parts for me are:

1. How to trigger another query if the daily schedule finds the down message.
2. How to keep the query running for the following 7 days.

Any help would be much appreciated. Thank you.
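The decision rule in steps 1-3 reduces to comparing the newest "down" timestamp with the newest "up" timestamp. A sketch of just that logic (the function name is an illustration; how to schedule it in Splunk is the open part of the question):

```python
def should_alert(down_times, up_times):
    """Alert only if the most recent 'down' has no newer 'up'.
    down_times/up_times are epoch timestamps of matching events."""
    if not down_times:
        return False                      # nothing went down
    last_down = max(down_times)           # step 1: newest "down" only
    if not up_times:
        return True                       # down, never came back up
    return max(up_times) <= last_down     # step 3: newest "up" must be newer

print(should_alert([100, 200], [150]))       # True: last up predates last down
print(should_alert([100, 200], [150, 250]))  # False: recovered at 250
```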
I am having trouble getting this case statement to work (I receive "Error in eval command"):

| eval match=case(
    cidrmatch("10.xx.x.0/16",asset_ip),"groupA",
    cidrmatch("10.xx.x.0/16",asset_ip),"groupA",
    cidrmatch("192.xx.xx.0/25",asset_ip),"groupA",
    cidrmatch("192.xxx.xx.0/24",asset_ip),"groupA",
    cidrmatch("10.xx.xx.0/24",asset_ip),"groupB",
    cidrmatch("192.xx.x.0/24",asset_ip),"groupB",
    cidrmatch("192.xxx.xx.0/24",asset_ip),"groupB",
    cidrmatch("192.xxx.xx.0/24",asset_ip),"groupB",
    cidrmatch("10.xx.x.0/16",asset_ip),"groupC",
    cidrmatch("10.xx.x.0/16",asset_ip),"groupC",
    cidrmatch("10.xxx.x.0/16",asset_ip),"groupC",
    cidrmatch("10.xxx.x.0/16",asset_ip),"groupC",
    cidrmatch("10.xxx.x.0/16",asset_ip),"groupC",
    cidrmatch("10.xxx.x.0/16",asset_ip),"groupC",
    cidrmatch("10.xxx.x.0/16",asset_ip),"groupC"), "Other")

I can't seem to figure out why this isn't working. Is 'case' the wrong statement to use here?
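For reference, the intended behavior, first-match-wins CIDR classification with a fallback label, can be sketched outside SPL. The networks and helper name below are illustrative placeholders, not the redacted ranges above:

```python
import ipaddress

# First-match-wins rules, analogous to case(cidrmatch(...), label, ...).
RULES = [
    ("10.0.0.0/16", "groupA"),
    ("192.168.1.0/24", "groupB"),
    ("172.16.0.0/12", "groupC"),
]

def classify(ip, default="Other"):
    """Return the label of the first CIDR containing ip, else the default."""
    addr = ipaddress.ip_address(ip)
    for cidr, label in RULES:
        if addr in ipaddress.ip_network(cidr):
            return label
    return default

print(classify("10.0.5.9"))   # groupA
print(classify("8.8.8.8"))    # Other
```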
Hey All, I have the following issue. There is a lookup table I am using within a query where some items are returned with quotes and some are not. I have the following query, and I need to ensure all the results are returned without quotes. (Query screenshot not included.) My final query looks as follows: (screenshot not included). I know these quotes are causing issues with my final lookups. Any help is greatly appreciated.
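A sketch of the normalization the post is after, stripping one layer of surrounding double quotes while leaving unquoted values alone. The helper name is hypothetical; in SPL, something like an eval with trim(field, "\"") before the lookup might play the same role (hedged, untested against this data):

```python
def normalize(value):
    """Strip one layer of surrounding double quotes, if present."""
    if len(value) >= 2 and value[0] == '"' and value[-1] == '"':
        return value[1:-1]
    return value

print(normalize('"host-01"'))  # host-01
print(normalize('host-02'))    # host-02 (unchanged)
```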
Receiving the error below. The app used to work with the same credentials, but we are now seeing these failures.

2022-05-24 15:49:06,518 ERROR pid=16541 tid=MainThread file=SecKit_SA_geolocation_rh_updater.py:post_update:157 | Exception generated when attempting to backup a lookup file
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SecKit_SA_geolocation/bin/SecKit_SA_geolocation_rh_updater.py", line 122, in post_update
    stderr=subprocess.STDOUT,
  File "/opt/splunk/lib/python3.7/subprocess.py", line 411, in check_output
    **kwargs).stdout
  File "/opt/splunk/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['$SPLUNK_HOME/etc/apps/SecKit_SA_geolocation/bin/geoipupdate/linux_amd64/geoipupdate -v -d /opt/splunk/etc/apps/SecKit_SA_geolocation/data/ -f /tmp/GeoIPij8vhpu5.conf']' returned non-zero exit status 126.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SecKit_SA_geolocation/bin/SecKit_SA_geolocation_rh_updater.py", line 124, in post_update
    except CalledProcessError as e:
NameError: name 'CalledProcessError' is not defined
Hello,

Specs: Splunk Enterprise 8.2.1, server OS: RHEL 7.9.

I have a distributed installation of Splunk Enterprise on RHEL 7.9. RHEL comes with its own version of Python, and Splunk ships two more versions of Python. I am creating an external lookup that runs a Python script which performs an API call and retrieves values based on the user's input in the Splunk search.

My goal is to install an isolated version of Python 3 on the server. To achieve this I need to build Python 3 from source, and in order to compile the source code I need to install "Development Tools" along with the other packages detailed below.

Main concern: I am not sure whether installing these tools might negatively affect the behavior of Splunk or the OS Python. This Splunk instance is on critical infrastructure, and there is no margin for error.

For reference, these are the steps to be performed:

1. Download Python from source: https://www.python.org/downloads/source/
2. Create a directory for the new Python installation: opt/ti_scripts/python3.10.4
3. Install tools for compiling code: sudo yum groupinstall "Development Tools" -y
4. Additional compiling tools: sudo yum install gcc openssl-devel libffi-devel bzip2-devel -y
5. Decompress the Python tar: tar xvf Python-3.10.4.tgz
6. Go to the decompressed directory: cd Python-3.10.4
7. Specify the location for the new Python installation: sudo ./configure --enable-optimizations --prefix=/opt/ti_scripts/python3.10.4
8. Install without altering the default Python: sudo make altinstall
9. Create a soft link for the new Python: sudo ln -s /opt/ti_scripts/python3.10.4 usr/bin/python3.10.4

Please help me with this situation, thanks.
I have a DCS with the Splunk Add-on for VMware and 2 DCNs. For some reason, it stopped ingesting data for two days. Is it possible to backfill the data for the two days it missed?
Hi All, I have created a summary index. I am using "sistats count by <fields>" to populate all the required fields, and I can see those fields. The issue: on this index I am trying to use the chart command, and also stats count(<field>) as test (the chart command in one query and stats count in another), but neither works; no results are returned. If I instead populate the summary index using the stats command, both commands work. Please let me know why the chart and stats commands are not working on the summary index that I populated with sistats (sichart does not work either). I am missing some technical detail here. Regards, PNV
Hi, I am trying to create a query to get all values that are larger than the average value. I have a file size field, and I need to find all the files that are larger than the average file size.
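In SPL this kind of comparison is commonly done by computing the aggregate alongside each row, e.g. eventstats avg(file_size) as avg_size followed by where file_size > avg_size (hedged; the field names are assumptions). The underlying logic, shown outside Splunk:

```python
# Keep only files larger than the mean size; mirrors the eventstats/where
# pattern of comparing each row to an aggregate computed over all rows.
sizes = {"a.log": 10, "b.log": 50, "c.log": 90}
avg = sum(sizes.values()) / len(sizes)                      # 50.0
larger = {name: s for name, s in sizes.items() if s > avg}
print(larger)  # {'c.log': 90}  (b.log equals the average, so it is excluded)
```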
Hi, when I create Splunk apps that derive from React components, I usually create the React component on the command line and start Splunk via the command line as well, resulting in Splunk being shown on my localhost. Is it possible to create such React components in an IDE like Visual Studio in conjunction with Splunk? Thanks