All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Imagine a Splunk Cloud setup where you have two or more search heads (say H1, H2, ...) running against a single stack of indexers. Now, if you create an index from the Splunk UI on head H1, does it get automatically propagated to the other heads?
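One way to compare what each head can actually see is to list the indexes visible from each search head and diff the results. This is only a sketch for checking visibility, not an answer about propagation:

```spl
| eventcount summarize=false index=*
| dedup index
| fields index
```

Run it on H1 and H2 separately; if the newly created index appears on one head but not the other, it has not been propagated.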
Hi, I'm getting this message from save_container():

"Container addition failed, reason from server: Asset myasset (7) does not support ingestion."

I've added all roles and users in the asset settings (including automation), but it still fails with the same error. How do I enable ingestion? Thanks.

PS: This is Splunk Phantom version 4.9.34514; there is no "Ingest Settings" tab as mentioned in https://docs.splunk.com/Documentation/Phantom/4.9/Admin/AppsAssets
Hi - I have a query as below:

index=xxx "Project Id"
| rex field=_raw "Project\s*Id\s*-\s*(?<ProjectID>\d+)"
| eval eventTime=strftime(_time, "%m/%d/%Y %H:%M:%S")
| table eventTime ProjectID

It presents the table perfectly - basically a row whenever a project does anything on the system. I would like a heatmap if possible. When I select the built-in one, it just shows the highest Project ID number; I would like a heatmap over time, I guess, so I know when the system is being used. The other thing is that in a visualization, if the Project ID is 2000, it shows as 2,000 in the bar chart or any other chart.
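A common way to build a time-of-day heatmap in SPL is to chart counts over day-of-week split by hour, then apply a heatmap color format to the table/chart. This is a sketch reusing the field extraction from the post; the exact visualization settings depend on your Splunk version:

```spl
index=xxx "Project Id"
| rex field=_raw "Project\s*Id\s*-\s*(?<ProjectID>\d+)"
| eval hour=strftime(_time, "%H"), day=strftime(_time, "%A")
| chart count over day by hour
```

For the 2,000 formatting issue, charts treat ProjectID as a number and apply a thousands separator; converting it to a string first (e.g. `| eval ProjectID=tostring(ProjectID)`) usually makes it render as a categorical label instead.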
Hello. Good afternoon. Looking to troubleshoot a DB Connect error, "cannot communicate with task server". Below are the error messages received:

2020-12-09 09:04:27.033 -0600 [Thread-15] INFO org.eclipse.jetty.server.AbstractConnector - Stopped application@b54cfd8{HTTP/1.1,[http/1.1]}{127.0.0.1:9998}
2020-12-09 09:04:27.035 -0600 [Thread-15] INFO com.splunk.dbx.server.task.DefaultTaskService - action=try_to_graceful_stop_task_service
2020-12-09 09:04:27.035 -0600 [Thread-15] INFO org.quartz.core.QuartzScheduler - Scheduler QuartzScheduler_$_NON_CLUSTERED shutting down.
2020-12-09 09:04:27.035 -0600 [Thread-15] INFO org.quartz.core.QuartzScheduler - Scheduler QuartzScheduler_$_NON_CLUSTERED paused.
2020-12-09 09:04:27.519 -0600 [Thread-15] INFO org.quartz.core.QuartzScheduler - Scheduler QuartzScheduler_$_NON_CLUSTERED shutdown complete.
2020-12-09 09:04:27.519 -0600 [Thread-15] INFO c.s.dbx.server.managedobject.CheckpointCleaner - action=stop_checkpoint_cleaner
2020-12-09 09:04:27.520 -0600 [Thread-15] INFO c.s.d.s.api.service.database.ConnectionCleaner - action=stop_datasource_clean_up_task
2020-12-09 09:04:27.540 -0600 [Thread-15] INFO org.eclipse.jetty.server.handler.ContextHandler - Stopped i.d.j.MutableServletContextHandler@809f75c{/,null,UNAVAILABLE}

Checked the DB Connect configuration, including JRE Installation Path, JVM Options, and Task Server Port (9998), and see no issues. How can this issue be resolved?
Regards, Max
I have found various articles that query and return information about universal forwarders, but I am trying to get a list of the heavy forwarders in our Splunk deployment. Does anyone have a simple search that would give me a list of all heavy forwarders?
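One commonly used approach is to look at the forwarder connections that indexers log to _internal; the fwdType field there distinguishes full (heavy) forwarders from universal (uf) and light (lwf) ones. A sketch, not verified against your environment:

```spl
index=_internal sourcetype=splunkd group=tcpin_connections fwdType=full
| dedup hostname
| table hostname sourceIp fwdType version os arch
```

Extend the time range far enough (e.g. 24 hours) to catch forwarders that connect infrequently.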
Basically the data looks like this. I want to calculate the average time to complete an order across many orders, each with a different number of items.

_time,orderId,"receivedOrder"
_time,itemId,orderId,"completedItemProcessing"
_time,itemId,orderId,"completedItemProcessing"
_time,itemId,orderId,"completedItemProcessing"

I have a query that works, but it's hugely inefficient and throws errors due to hitting the stats limit. There has to be a better way to do this. streamstats or delta just gave me the duration from the previous event in the chain, and I need it to evaluate against the order time for all items in an order, not the previous item.

index=index "@mt"="itemComplete" OR @mt="orderReceived"
| fillnull value=orderReceived itemComplete
| stats earliest(_time) as orderReceived list(itemId) as itemId, list(_time) as prepTime by orderId
| mvexpand prepTime
| eval timeToPrep=prepTime-orderReceived
| stats avg(timeToPrep) as avgItemCompletedDuration
| where startOfPreperation>0
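One way to avoid the list()/mvexpand pattern (and its limits) is eventstats, which copies the order's received time onto every item event without collapsing rows first. A sketch, assuming the field names from the post:

```spl
index=index "@mt"="itemComplete" OR "@mt"="orderReceived"
| eventstats earliest(_time) as orderReceived by orderId
| where '@mt'="itemComplete"
| eval timeToPrep=_time-orderReceived
| stats avg(timeToPrep) as avgItemCompletedDuration
```

Each item event compares its own _time against the order's earliest event, which is the "all items against the order time" behavior described, rather than item-to-previous-item.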
Hello everyone, I would like to upgrade the OS of the Splunk architecture. Currently the architecture runs on Windows Server 2012 and we want to upgrade to Windows Server 2016, because in order to install Splunk Enterprise 8.x I need to have Windows Server 2016. Architecture: SH cluster + indexer cluster + DS + CM. Are there any considerations I have to take into account? Thanks in advance @woodcock
I have an app that is enabled in my Splunk environment but has been archived on Splunkbase. Does anyone know how I can find out whether the app is still being used in a search or dashboard? I don't want to remove the app before discovering how that would affect my Splunk environment.
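One way to start is to inventory the knowledge objects that live in the app via REST. A sketch; replace <app_name> with the app's directory name:

```spl
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:acl.app="<app_name>"
| table title eai:acl.app eai:acl.owner
```

The same pattern against /servicesNS/-/-/saved/searches shows saved searches owned by the app. Recent search activity may also show up in index=_audit action=search, though correlating that back to a specific app can take some digging.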
I have dashboard panels which set token values with $result.<field_name>$. However, our environment is a little congested and the searches will sit at 99%, 98.9%, or 100% but won't actually finalize until maybe 60-90 seconds later. I can see the results are there and are what I'm expecting, but the tokens won't set unless the panel searches are finalized. I've tried placing those tokens in the below:

<progress>
  <condition match="'job.resultCount' &gt; 0">
    <set token="abc">$result.field1$</set>
    <set token="cba">$result.field2$</set>
  </condition>
</progress>

but it still waits until the search finalizes before my other panels can utilize those tokens. Is there any way to finalize panels after a set amount of time with an 'option' tag in the XML, or something?
I have configured the FMC and eNcore. Running the test from the Splunk CLI using .splencore.sh test is successful. On checking Search and Reporting for sourcetype="cisco:estreamer:log", I see the exceptions below. How can I fix this? Currently we are not seeing the events sent from FMC.

2020-12-16 13:04:09,058 Service      ERROR    [no message or attrs]: 'EncoreException' object has no attribute 'message'
'EncoreException' object has no attribute 'message'
Traceback (most recent call last):
  File "/root/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/diagnostics.py", line 61, in execute
    estreamer.Crypto.create( settings = self.settings )
  File "/root/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/crypto.py", line 104, in create
    settings.publicKeyFilepath())
  File "/root/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/crypto.py", line 33, in __init__
    privateKeyFilepath ))
estreamer.exception.EncoreException: privateKeyFilepath: /root/splunk/etc/apps/TA-eStreamer/bin/encore/192.168.0.100-8302_pkcs.key does not exist or is not a file

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/splunk/lib/python3.7/getpass.py", line 69, in unix_getpass
    old = termios.tcgetattr(fd)     # a copy to save
termios.error: (25, 'Inappropriate ioctl for device')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/diagnostics.py", line 70, in execute
    password = getpass.getpass( prompt = definitions.STRING_PASSWORD_PROMPT )
  File "/root/splunk/lib/python3.7/getpass.py", line 91, in unix_getpass
    passwd = fallback_getpass(prompt, stream)
  File "/root/splunk/lib/python3.7/getpass.py", line 126, in fallback_getpass
    return _raw_input(prompt, stream)
  File "/root/splunk/lib/python3.7/getpass.py", line 148, in _raw_input
    raise EOFError
EOFError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/controller.py", line 247, in start
    diagnostics.execute()
  File "/root/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/diagnostics.py", line 73, in execute
    raise estreamer.EncoreException( definitions.STRING_PASSWORD_STDIN_EOF )
estreamer.exception.EncoreException: Unable to read password from console. Are you running as a background process? Try running in test or foreground mode

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./estreamer/service.py", line 181, in main
    self.start( reprocessPkcs12 = args.pkcs12 )
  File "./estreamer/service.py", line 150, in start
    self._posix()
  File "./estreamer/service.py", line 92, in _posix
    self._loop()
  File "./estreamer/service.py", line 59, in _loop
    self.client.start()
  File "/root/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/controller.py", line 255, in start
    'description': ex.message
AttributeError: 'EncoreException' object has no attribute 'message'
Hello, I am trying to create an alert in Splunk. I want to be alerted every time a job fails 2 times or more within an hour. We have several different jobs running. Right now, I have a table displaying each job with its number of failures:

index=?? uuid=*
| search status=success
| rex "message=(?<message>.*)"
| stats count(eval(status=="failed")) AS Failures by workflow_name
| table workflow_name, Failures

This displays something like:

workflow_name    Failures
workflow_1       3
workflow_2       1
workflow_3       7

How can I fix this to filter and only include the workflows that have failed more than once (workflow_1 & workflow_3) within a specific time frame of 1 hour? Additionally, I want to pull in info about the specific workflow's latest failure (for example: message, uuid, etc.), like this:

workflow_name    Failures    Latest message    Latest uuid
workflow_1       3           error msg         12345678
workflow_3       7           error msg         98765432
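A sketch of one way to do this, assuming the field names from the post (note that the original query's `| search status=success` filters out the very failures being counted, so it is dropped here):

```spl
index=?? uuid=* status=failed earliest=-1h
| rex "message=(?<message>.*)"
| stats count as Failures latest(message) as "Latest message" latest(uuid) as "Latest uuid" by workflow_name
| where Failures > 1
```

Saved as an alert scheduled hourly (or with the time range picker set to the last 60 minutes), it fires only when a workflow has 2 or more failures in that window, with the most recent message and uuid per workflow.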
We have used the wildcard option for metric definitions; however, we need to separate segments that start with a letter rather than a number. Is there a way to add regex to metric name segment interpretation? (If not, this would make a great feature.)

E.g., we have this metric name, which works well:

hostname/Custom Metrics/SpecialApp/somename/component/*/metricname

However, we want to filter out all the "*" segment values that are numbers rather than text string names. Is it possible to do something like this?

hostname/Custom Metrics/SpecialApp/somename/component/[a-z]*/metricname

We actually tried this and it doesn't work; it seems the interpreter fails. Any other ideas?
Hello all, we have a requirement to integrate Dynatrace (SaaS service) with Splunk 8.0.0 (on premise). So far we have installed the Dynatrace App and Dynatrace Add-on from Splunkbase, defined 2 new indexes, and configured the inputs for events and metrics using the URL of the Dynatrace SaaS service, a token, and the default port from Dynatrace API management. However, we are not able to see the data flow. Kindly advise.

The port used in the inputs is the default port from Dynatrace API management; should that port be opened on the Splunk (on-premise) server? And if we use HEC to get the data in, the default port is 8088; should that also be opened on the Splunk (on-premise) server?
Hi there, I have a CSV file with the following header and values in it. There are some empty values for a field, and a line with an empty field value isn't extracting correctly. Ex:

Field1  Field2  Field3  Field4  Field5   Field6   Field7   Field8
abc     123     gfdj    8583    djhcsh   jdcjhd   dcu      jxnchsdi
dabc    1423    ggfdj   98583   kjdcjhd           nchsdi   sjdkvv

In the example, line 1 is extracting properly, but line 2, where Field6 is empty, isn't. Any help is appreciated. Thanks.

My props.conf:

[sourcetype]
INDEXED_EXTRACTIONS = CSV
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1
SHOULD_LINEMERGE = false
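With INDEXED_EXTRACTIONS = CSV and FIELD_DELIMITER = , an empty field still needs its delimiter, i.e. two consecutive commas, so the remaining columns stay aligned with the header. A sketch of what the raw file should look like, using the (made-up) values from the post:

```
Field1,Field2,Field3,Field4,Field5,Field6,Field7,Field8
abc,123,gfdj,8583,djhcsh,jdcjhd,dcu,jxnchsdi
dabc,1423,ggfdj,98583,kjdcjhd,,nchsdi,sjdkvv
```

If the source file drops the delimiter entirely when a value is missing (only 7 commas on that line), Splunk will shift the later columns left, which matches the misalignment described.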
Hi team, I have a logfile containing keywords such as ORA-1, ORA-212, and ORA-609, and similarly we have more than 100 distinct ORA- codes in it. During the search we want to exclude the following ORA codes:

ORA-609
ORA-3136
ORA-12008
ORA-0

All other ORA- codes should still be displayed while searching the logs, so that we can create an alert and schedule it. The base search is:

index=abc sourcetype=def host=xxx

Kindly help with the query.
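A sketch of one way to do this, extending the base search from the post. Extracting the code with rex and filtering on the extracted field avoids term-segmentation surprises (e.g. excluding "ORA-0" as a raw term without touching ORA-01 codes):

```spl
index=abc sourcetype=def host=xxx "ORA-"
| rex "(?<ora_code>ORA-\d+)"
| where isnotnull(ora_code) AND NOT ora_code IN ("ORA-609", "ORA-3136", "ORA-12008", "ORA-0")
| stats count by ora_code
```

Drop the final stats if you want the raw events for alerting rather than a summary per code.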
I need to find users that have multiple infections over a 7-day period. For example, user1 has an infection today, and I need to go back 7 days to see if this user has had any other infections. My thought was to run a subsearch looking back 7 days to "today at midnight" and find users that have an infection event. Then run the same search for "today" (midnight to now), and if a user from the subsearch is found, and only that user, alert. I want to combine or display all of the events from both searches. Example:

user1 - events on: (4 events) 12/10/2020, (1 event) 12/16/2020
user2 - events on: 12/11/2020
user3 - events on: 12/10/2020

The first search, over the past 7 days, finds all of the above. The outer search will search the last 60 minutes and find user1. I want to alert on and display the 5 events from user1. This will generate an alert; throttling will be done in the alert configs to remove duplicates. Thanks as always.
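A sketch of the inverted approach (outer search over 7 days, subsearch for the recent window), with hypothetical index and field names (index=av_index, signature, user) that would need replacing with the real ones:

```spl
index=av_index signature=* earliest=-7d@d
    [ search index=av_index signature=* earliest=-60m@m
      | dedup user
      | fields user ]
| eventstats count as infections by user
| where infections > 1
```

The subsearch restricts results to users seen in the last 60 minutes; eventstats then keeps every matching event (not just a summary row) for users with more than one infection in the 7-day window, which gives the "display all 5 events from user1" behavior.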
Hi group, recently upgraded to 8.1.0.1 with a single 'all-in-one' configuration. Yesterday I added a new line at the bottom of a long-used lookup CSV file, and today it seemed to be ignored. We have a simple search that basically checks for unknown logins (see below):

index=msad NOT [| inputlookup SIDLookup.csv | fields SID]
| dedup SID

Now, even when I searched with "| inputlookup SIDLookup.csv", the last entry did not show up. I then edited the file again, added a blank new line after my last entry, and ensured 'word wrap' was off. The lookup file is only a four-field lookup with nothing crazy (Name,SID,whenCreated,whenChanged). Each value is enclosed in double quotes and comma-separated with no spaces in between. Every other entry is working fine, just not the last one. Trying to figure out where this is breaking down.
Thanks, Greg
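One way to see whether the last row is being read at all, or read with hidden characters (a stray BOM, carriage return, or trailing quote from the editor), is to inspect the tail of the lookup directly. A debugging sketch:

```spl
| inputlookup SIDLookup.csv
| tail 3
| eval sid_len=len(SID), name_len=len(Name)
| table Name SID sid_len name_len
```

If the row appears here but with an unexpected length, the value likely contains invisible characters; if it does not appear at all, the file Splunk is reading may not be the one edited (check for a copy of SIDLookup.csv in another app's lookups directory taking precedence).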
Hello guys, we used this in inputs.conf according to the Splunk CIM-compliant add-on for Unix and Linux:

[monitor:///var/log]
whitelist = (messages|secure|auth|maillog|audit\.log|cron)
blacklist = (lastlog|anaconda\.syslog)
disabled = 0
index = linux

However, on the UF it still looked for /var/log/anaconda/pre-anaconda.log and others. This looks like weird behaviour? Thanks.
Splunk Enterprise 7.3.4, UF 7.1.4
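Note that whitelist and blacklist are unanchored regular expressions matched against the full file path, and monitor:// recurses into subdirectories, so the tailing processor may still scan files under /var/log/anaconda even when they are ultimately not ingested. A sketch of a tighter stanza that blacklists the subdirectory explicitly (keeping the original patterns otherwise):

```
[monitor:///var/log]
whitelist = (messages|secure|auth|maillog|audit\.log|cron)
blacklist = (lastlog|anaconda\.syslog|/anaconda/)
disabled = 0
index = linux
```

Checking index=_internal sourcetype=splunkd component=TailingProcessor on the UF should confirm whether the files are actually being ingested or merely examined and skipped.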
Hello everyone on the Splunk forum. I want to integrate logs from the following Fortinet devices:

1) Switch, model: FortiSwitch148E-POE
2) Access points, models: FortiAP 221C and FortiAP 221E

I am aware that for the Fortinet firewall (FortiGate) I can install a TA. For the FortiSwitch I have found the following manual: https://kb.fortinet.com/kb/documentLink.do?externalID=FD44999
But what should be done on the Splunk side to get these Fortinet logs properly processed? I could not find add-ons for that.
Thanks
BR Dawid
I have two different searches in the same index. In the first search I filter by user ID and session ID; in the second search I have to match the session IDs from the first search against the second search's results, and the final output should be a total count. Can you please help me with the above scenario?
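A common pattern for this is a subsearch: the inner search collects the session IDs, and the outer search counts matching events. A sketch with hypothetical sourcetype and field names (first_sourcetype, second_sourcetype, userId, sessionId) that would need replacing:

```spl
index=myindex sourcetype=second_sourcetype
    [ search index=myindex sourcetype=first_sourcetype userId="someUser"
      | dedup sessionId
      | fields sessionId ]
| stats count
```

The subsearch result is rewritten into a filter of the form (sessionId=A OR sessionId=B ...), so field names must match between the two searches; be aware of the default subsearch limits (around 10,000 results) if the session list is large.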