All Topics

Hi, I decided to spin up my Splunk home environment again, and I'm running into an issue this time while installing my UF 9.0 on my Raspberry Pi. It's a Pi 4 B running Ubuntu 22.04.1 LTS on aarch64 architecture. I followed the install instructions according to the "Installing a UNIX forwarder" page from Splunk, and used the bundle "splunkforwarder-9.0.0-6818ac46f2ec-Linux-armv8.tgz". After getting some normal permissions things out of the way, I started the forwarder, and this time it's giving me the error:

Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'

So after running splunk btool check --debug and grepping for 'No spec' and 'Invalid' (these are all the error types btool reported), it returns the following after a clean install:

No spec file for: /opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/app.conf
No spec file for: /opt/splunkforwarder/etc/apps/introspection_generator_addon/default/app.conf
No spec file for: /opt/splunkforwarder/etc/apps/search/default/app.conf
No spec file for: /opt/splunkforwarder/etc/apps/splunk_internal_metrics/default/app.conf
No spec file for: /opt/splunkforwarder/etc/manager-apps/_cluster/default/indexes.conf
No spec file for: /opt/splunkforwarder/etc/system/default/app.conf
No spec file for: /opt/splunkforwarder/etc/system/default/conf.conf
No spec file for: /opt/splunkforwarder/etc/system/default/federated.conf
No spec file for: /opt/splunkforwarder/etc/system/default/telemetry.conf
Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).

I cannot really find answers on this topic; most results relate to other apps that people installed, but I only installed the universal forwarder, nothing else.
I'm also not sure what the answer is to the invalid key in the stanza in alert_actions.conf, and would like to know if there is a fix. I also found the following warning, and read online that it doesn't impact Splunk's functionality, but is there a way to suppress it, and how can I be sure it's not an issue?

Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk /opt/splunkforward

My /opt/ permissions:

splunk@hostname:/opt/splunkforwarder$ ls -lia /opt
148855 drwxr-xr-x 10 splunk splunk 4096 Aug 12 15:47 splunkforwarder

Any help would be appreciated. I am trying to get the cleanest start possible, because on my last run I had a problem with the way my data was being ingested (the 'sourcetype too small' problem) and I wasn't able to fix it back then. Kind regards
Hi, this is my first time starting a discussion, so please pardon my mistakes. I am trying to perform a search where I can sort based on a series of numbers occurring at the end of a text. Example:

index=abc sourcetype=xyz Entity=HI* Text="*Rejected message received - code 456"
index=abc sourcetype=xyz Entity=HI* Text="*Rejected message received - code 789"
index=abc sourcetype=xyz Entity=HI* Text="*Rejected message received - code 345"

So I would like to sort count by the 3-digit code number. Is it possible to do it?
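A hedged sketch of one way to approach this, assuming the code is always three digits at the very end of the Text field (the rex pattern below is an assumption about the event layout, not something confirmed in the question):

```spl
index=abc sourcetype=xyz Entity=HI* Text="*Rejected message received - code *"
| rex field=Text "code (?<code>\d{3})$"
| stats count by code
| sort - count
```

If the code can appear elsewhere in the text, the anchor `$` would need to be dropped or adjusted.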
How can I solve this: create a new field called "StartTime" and set the value to seven days ago from today, snapped to the beginning of the day?
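In SPL this is typically done with eval and relative_time; the snap-to-day modifier @d is the key part. A minimal sketch (the second line is optional and only renders the epoch value human-readably):

```spl
| eval StartTime=relative_time(now(), "-7d@d")
| eval StartTime_readable=strftime(StartTime, "%Y-%m-%d %H:%M:%S")
```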
How do I get SNMPv3 data from another tool (HP tools) into Splunk? The HP tool has a configuration where it can forward traps in SNMPv3 to a Splunk HF, but it requires certain credentials which should be configured at the HF end. Please let me know what configurations have to be done at the HF end for Splunk to receive SNMPv3 trap data.
I have the below Splunk search, which gets me all entityIds with a count:

index=coreprod pod=xxxx CASE(xxxxxx) event=ack
| stats count by entityId
| where count>1

I want to list ONLY those entityIds where the difference between their occurrences is less than 1 hr (or xx min).
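One possible approach, sketched with streamstats: sort each entityId's events in time order, compute the gap between consecutive occurrences, and keep entities with at least one gap under the threshold. The 3600-second threshold and the field handling are assumptions:

```spl
index=coreprod pod=xxxx CASE(xxxxxx) event=ack
| sort 0 entityId _time
| streamstats current=f window=1 last(_time) as prev_time by entityId
| eval gap_sec=_time-prev_time
| where gap_sec<3600
| stats count by entityId
```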
Hi All, we collect Fortinet FortiGate logs into Splunk. However, the incoming logs are in CEF format, do not match the add-on, and there is a prefix "FTNTFGT" at the beginning of the fields. I am sharing a sample log below; do we need to change a config on the FortiGate?

<189>Aug 12 13:35:50 xxxx CEF:0|Fortinet|Fortigate|vxxx|00xxx|traffic:forward accept|3|deviceExternalId=xxxIxxxx FTNTFGTeventtime=1660300550574125940 FTNTFGTtz=+0300 FTNTFGTlogid=xxx cat=traffic:forward FTNTFGTsubtype=forward FTNTFGTlevel=notice FTNTFGTvd=xxx src=xxx spt=57425 deviceInboundInterface=xxx FTNTFGTsrcintfrole=lan dst=xxx dpt=18 deviceOutboundInterface=xxx FTNTFGTdstintfrole=wan FTNTFGTsrccountry=xxx FTNTFGTdstcountry=xxx externalId=xxx proto=6 act=accept FTNTFGTpolicyid=xxx FTNTFGTpolicytype=policy FTNTFGTpoluuid=xxxxxxx FTNTFGTpolicyname=xxxx duser=xxxxx FTNTFGTgroup=xxxx FTNTFGTauthserver=xxx app=HTTPS FTNTFGTtrandisp=xxx sourceTranslatedAddress=xxx sourceTranslatedPort=xxxx FTNTFGTappid=xxx FTNTFGTapp=xxxx FTNTFGTappcat=xxxx FTNTFGTapprisk=elevated FTNTFGTapplist=xxx FTNTFGTduration=xxx out=4348 in=2983 FTNTFGTsentpkt=38 FTNTFGTrcvdpkt=xx FTNTFGTsentdelta=123 FTNTFGTrcvddelta=104 FTNTFGTdevtype=Router FTNTFGTmastersrcmac=xxxxx FTNTFGTsrcmac=xxxxFTNTFGTsrcserver=0

@jerryzhao
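As a search-time workaround (not a FortiGate-side fix), the FTNTFGT prefix can be copied away with foreach, so the values are also available under the unprefixed names. This sketch assumes the prefixed fields are already being extracted as key=value pairs:

```spl
... your base search ...
| foreach FTNTFGT* [ eval <<MATCHSTR>>='<<FIELD>>' ]
```

Here `<<MATCHSTR>>` is the part of each field name matched by the wildcard, so FTNTFGTsubtype becomes subtype, and so on.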
Hello, I need to get the logs from an external app into my Splunk Cloud instance. Where can I get the agent that I need to install on the Linux app server? And where is the route where I can find these logs? The logs should be in JSON format. Thanks a lot, have a good day!
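If the usual Splunk Cloud pattern applies here (a universal forwarder installed on the Linux app server, monitoring the app's log file), the input stanza would look roughly like this. The path, sourcetype, and index below are placeholders, not values from the question:

```ini
# hypothetical inputs.conf on the universal forwarder
[monitor:///var/log/myapp/app.json]
sourcetype = _json
index = main
```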
Hi, I have the following bar chart. The query for this bar chart is:

| inputlookup Migration-Status-All.csv
| search Vendor = "McAfee"
| stats count by "Migration Comments"
| eventstats sum(count) as Total
| eval perc=round(count*100/Total,2)
| eval dummy = 'Migration Comments'
| chart sum(perc) over "Migration Comments" by dummy

I need the "In Progress" bar to be yellow and the "Not Started" bar to be red. I tried using an eval and case but it didn't work for me. How can this be done? Many thanks!
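For a Simple XML dashboard, per-series colors are usually set with the charting.fieldColors chart option rather than in SPL; a sketch, assuming the series names match the "Migration Comments" values exactly:

```xml
<option name="charting.fieldColors">
  {"In Progress": 0xFFFF00, "Not Started": 0xFF0000}
</option>
```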
Trying to set graph colors with fieldColors in options, in Dashboard Studio. I tried to set them in both dataSources and visualizations, to no avail. What am I doing wrong? Tested on Splunk Cloud Version 8.2.2203.2. The whole code for the dashboard is below:

{
    "dataSources": {
        "ds_sourcetype": {
            "type": "ds.search",
            "options": {
                "query": "index=_internal _sourcetype IN ( splunk_web_access, splunkd_access)\n| timechart count by _sourcetype",
                "fieldColors": {
                    "splunk_web_access": "#FF0000",
                    "splunkd_access": "#0000FF"
                }
            },
            "name": "Search_1"
        }
    },
    "visualizations": {
        "viz_sourcetype": {
            "type": "splunk.line",
            "options": {
                "fieldColors": {
                    "splunk_web_access": "#FF0000",
                    "splunkd_access": "#0000FF"
                },
                "yAxisAbbreviation": "auto",
                "y2AxisAbbreviation": "auto",
                "showRoundedY2AxisLabels": false,
                "legendTruncation": "ellipsisMiddle",
                "showY2MajorGridLines": true,
                "xAxisLabelRotation": 0,
                "xAxisTitleVisibility": "show",
                "yAxisTitleVisibility": "show",
                "y2AxisTitleVisibility": "show",
                "yAxisScale": "linear",
                "showOverlayY2Axis": false,
                "nullValueDisplay": "gaps",
                "dataValuesDisplay": "off",
                "showSplitSeries": false,
                "showIndependentYRanges": false,
                "legendMode": "standard",
                "legendDisplay": "right",
                "lineWidth": 2,
                "backgroundColor": "#ffffff"
            },
            "dataSources": {
                "primary": "ds_sourcetype"
            }
        }
    },
    "inputs": {
        "input_global_trp": {
            "type": "input.timerange",
            "options": {
                "token": "global_time",
                "defaultValue": "-24h@h,now"
            },
            "title": "Global Time Range"
        }
    },
    "layout": {
        "type": "grid",
        "options": {},
        "structure": [
            {
                "item": "viz_sourcetype",
                "type": "block",
                "position": {
                    "x": 0,
                    "y": 0,
                    "w": 1200,
                    "h": 400
                }
            }
        ],
        "globalInputs": [
            "input_global_trp"
        ]
    },
    "title": "dashboard_studio_test",
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    }
                }
            }
        }
    }
}
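If I recall correctly, Dashboard Studio line and column charts take per-series colors via a seriesColorsByField option on the visualization (not fieldColors, and not on the data source). A sketch of what the visualization block might look like, assuming that option is available in your version:

```json
"viz_sourcetype": {
    "type": "splunk.line",
    "options": {
        "seriesColorsByField": {
            "splunk_web_access": "#FF0000",
            "splunkd_access": "#0000FF"
        }
    },
    "dataSources": {
        "primary": "ds_sourcetype"
    }
}
```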
Hello, I'm looking into how to automate Splunk setup for newly spun-up servers. As I'm still not the most proficient with the Splunk internal configs to determine what's needed and what's not, I want some guidance as to which config files I need to alter in order to prepare a newly spun-up server to be plugged into the wider Splunk deployment. Currently we have a distributed multisite setup, and the idea is to have a collection of the configs needed so that we can just alter and push them to the new server given the server's task, be it indexer, search head, or any other server we potentially need. So what I'm asking for is a pointer to which config files need to be staged for setup. (I assume it's mostly the ../system/*.conf files, but are there any others to keep a lookout for?)
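As a rough starting point, these are the files most often staged per role under $SPLUNK_HOME/etc/system/local/ (or pushed as apps from a deployment server). This list is illustrative, not exhaustive, and which stanzas you need depends on the role:

```ini
# server.conf            - serverName, site/cluster membership, replication settings
# inputs.conf            - listening ports, e.g. [splunktcp://9997] on indexers
# outputs.conf           - forwarding targets (and search heads forwarding internal logs)
# web.conf               - UI enable/disable, ports
# deploymentclient.conf  - points the instance at a deployment server, if used
```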
Hello everyone. We are experiencing download and a few upload failures from our indexers to SmartStore in AWS S3 (graph for the last 24 hours). I previously increased the Cache Manager limits from the default of 8 to 128 with a custom server.conf:

[cachemanager]
max_concurrent_downloads = 128
max_concurrent_uploads = 128

An example of an upload failure from the splunkd.log file (sourcetype=splunkd source="/opt/splunk/var/log/splunk/splunkd.log" component=CacheManager log_level=ERROR):

08-12-2022 03:37:28.565 +0000 ERROR CacheManager [950069 cachemanagerUploadExecutorWorker-0] - action=upload, cache_id="dma|<INDEX>~925~054DE1B7-4619-4FBC-B159-D4013D4C30AE|C1AB9688-CBC2-428C-99F5-027FA469269D_DM_Splunk_SA_CIM_Network_Sessions", status=failed, reason="Unknown", elapsed_ms=12050
08-12-2022 03:37:28.484 +0000 ERROR CacheManager [950069 cachemanagerUploadExecutorWorker-0] - action=upload, cache_id="dma|<INDEX>~925~054DE1B7-4619-4FBC-B159-D4013D4C30AE|C1AB9688-CBC2-428C-99F5-027FA469269D_DM_Splunk_SA_CIM_Network_Sessions", status=failed, unable to check if receipt exists at path=<INDEX>/dma/de/07/925~054DE1B7-4619-4FBC-B159-D4013D4C30AE/C1AB9688-CBC2-428C-99F5-027FA469269D_DM_Splunk_SA_CIM_Network_Sessions/receipt.json(0,-1,), error="network error"

An example of a download failure:

08-12-2022 09:06:44.488 +0000 ERROR CacheManager [1951184 cachemanagerDownloadExecutorWorker-113] - action=download, cache_id="dma|<INDEX>~204~431C8F6B-2313-4365-942D-09051BE286B8|C1AB9688-CBC2-428C-99F5-027FA469269D_DM_Splunk_SA_CIM_Performance", status=failed, reason="Unknown", elapsed_ms=483

We previously had an issue with NACLs in AWS where the S3 IP ranges had been updated but the NACLs were out of date. We have since allowed access to all S3 IP ranges in our region. Does anyone have an idea of how I can troubleshoot this so we can reduce, or eliminate, the failures? Has anyone else had any experience with this?
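Before digging further into networking, it may help to get a breakdown of the failures from the internal logs; a sketch of such a search, with the field names taken from the log samples above:

```spl
index=_internal sourcetype=splunkd component=CacheManager log_level=ERROR
| stats count by action, reason
| sort - count
```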
I have created a Splunk query with the time modifiers "earliest" and "latest" (e.g. earliest="15/01/2022 8 am" latest="15/01/2022 10 pm"), and I have also selected a time range in the time range picker (e.g. 23/12/2022 8 am to 23/12/2022 10 pm).

Splunk query:

timeformat="%m-%d-%Y %l:%M %p" earliest="15-01-2022 08:00 AM" latest="15-01-2022 10:00 PM" index="mobileApp" homepage

Time range picker values in the UI: From: 23/12/2022 8 am; To: 23/12/2022 10 pm

Whenever I click the 'search' button, the time range picker overrides the earliest/latest time modifiers used in the Splunk query. Question: could you please help me with making the time modifiers win over the 'time range picker' values? (I need results between 15/01/2022 8 am and 15/01/2022 10 pm, based on the time modifiers only.) Your answer would be greatly appreciated!
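One thing worth checking (this is a guess, not a confirmed diagnosis): inline earliest/latest normally take precedence over the picker, but only when they actually parse; if the custom timeformat fails to match the supplied strings, the picker's range can end up being used instead. A sketch using Splunk's default timestamp format instead of a custom one:

```spl
earliest="01/15/2022:08:00:00" latest="01/15/2022:22:00:00" index="mobileApp" homepage
```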
Latest data within a time span. I have a query as below, but I would like to get the latest data for a field within a span of 1w.

index=my_index | timechart span=1w estdc(host) by site

I would like to get the latest data for the field "encrypted=false" within the span=1w, for all hosts, by site.

Edit: encrypted=false changed from true.

Edit 2: A summary of what I am trying to get, as clearly articulated by @ITWhisperer: "So my guess was right - this is what the search is basically doing. For each week, it gets the latest encryption state for each host on each site. Then keeps only those statistics where the state is false. Then counts the events (one for each host with encryption false for that week) by week and site." Finally, it reorganises the data into chart format.
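A hedged sketch of the steps described in Edit 2, assuming each event carries an encrypted field with string values "true"/"false": take the latest state per host per site per week, keep the false ones, then count hosts by week and site:

```spl
index=my_index
| bin _time span=1w
| stats latest(encrypted) as encrypted by _time, site, host
| where encrypted="false"
| timechart span=1w dc(host) by site
```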
Hi, I have a log file in which I have two things: a functionality, and different repositories which use this functionality. I want to calculate the average occurrence of this functionality over each repository. The name of the functionality is a string.

Repo 1: A,A,A
Repo 2: A,A,A
Repo 3: A,A,A,A

The output should be:

Name of Repo    Avg for functionality A
1               0.3
2               0.3
3               0.4
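Judging by the expected output, this looks like each repo's share of the total occurrences (3/10, 3/10, 4/10) rather than an average. A sketch under that reading; the field names repo and functionality, and the comma-separated multivalue layout, are assumptions about your data:

```spl
... your base search ...
| eval occurrences=mvcount(split(functionality, ","))
| stats sum(occurrences) as occurrences by repo
| eventstats sum(occurrences) as total
| eval share=round(occurrences/total, 1)
| table repo, share
```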
Hi everyone, I'm starting to use the Splunk SDK for Python. I'm using Python 3.8 and Splunk 9.0. I get the error: HTTP 404 Action forbidden. I don't understand why, or how to fix it. Here is my code:

import splunklib.client as client
import splunklib.results as results


def connect_to_splunk(username, password, host='localhost', port='8089',
                      owner='admin', app='search', sharing='user'):
    try:
        service = client.connect(host=host, port=port, username=username, password=password,
                                 owner=owner, app=app, sharing=sharing)
        if service:
            print("Connected successfully!")
        return service
    except Exception as e:
        print(e)


def run_normal_mode_search(splunk_service, search_string, payload={}):
    try:
        job = splunk_service.jobs.create(search_string, **payload)
        # print(job.content)
        # check if the job is completed or not
        while True:
            while not job.is_ready():
                pass
            if job["isDone"] == "1":
                break
        for result in results.ResultsReader(job.results()):
            print(result)
    except Exception as e:
        print(e)


def main():
    try:
        splunk_service = connect_to_splunk(username='xxx', password='xxx')
        search_string = "search index= haindex1 |top host"
        payload = {"exec_mode": "normal"}
        run_normal_mode_search(splunk_service, search_string, payload)
    except Exception as e:
        print(e)


if __name__ == "__main__":
    main()

Here is the result:

Connected successfully!
HTTP 404 Not Found -- Action forbidden.
Process finished with exit code 0

Thanks and have a nice day! Julia
I have been monitoring a few Windows hosts with Splunk Universal Forwarder installed, and I have set up a deployment server on a Linux host to manage configurations on these hosts. Recently I moved one of these Windows hosts to another subnet, and then found that the deployment server cannot receive any phonehome from this host. I checked splunkd.log and splunkd_access.log and found no log entries with the Windows host's hostname/IP. However, when I run tcpdump on the Linux host, I can see the Windows host is actually sending traffic to the deployment server's port 8089. So the regular phonehome message is actually being sent to the deployment server, but it cannot "recognize" it as a phonehome message. Do you have any idea what could possibly go wrong? I have re-installed the universal forwarder on that host, but the issue is not solved. The Splunk version is v8.1.
Hi- We have a *nix server (EC2 instance) in AWS. How can we forward one of the application log files from this EC2 instance to our Splunk Cloud instance? I am a bit confused about the approach of using a universal forwarder. As per https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Admin/Configureinputs, the UF needs to point (via outputs.conf) to the indexer tier. But the indexer tier is all managed by Splunk themselves and we don't have any visibility. Whose hostname or IP am I supposed to put in outputs.conf then? Please note my requirement is not about ingesting CloudWatch or CloudTrail logs; for that we are all set. All we have access to is the Splunk Cloud search head (which is also our IDM instance) and a couple of heavy forwarders on premise. As per "Forwarding to Splunk cloud from AWS and on prem - Splunk Community", we can send UF logs directly to Splunk Cloud, which brings me back to my original question: what exactly do I need to put in the UF conf file to route it to Splunk Cloud? Do I need to give the search head URL?
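For Splunk Cloud you normally don't hand-write the indexer addresses at all: you download the universal forwarder credentials package (splunkclouduf.spl) from your cloud search head and install it as an app on the UF, and it ships an outputs.conf (plus certificates) already pointing at the cloud input endpoints. The shape is roughly as below; the hostnames here are placeholders, not your real stack:

```ini
# illustrative outputs.conf as installed by the splunkclouduf.spl credentials app
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs1.yourstack.splunkcloud.com:9997, inputs2.yourstack.splunkcloud.com:9997
```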
I need help sending data to two output types, [tcpout] and [httpout]. Is this possible? When I use outputs.conf pointing to both output types, I can only see data going to [httpout] (https://hecendpoint:8088); data is not going to the other indexer, which is [tcpout] indexerip:9997.
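For reference, a minimal sketch of an outputs.conf combining both output types, assuming a Splunk version whose forwarder supports [httpout]; the token and addresses are placeholders. Whether both groups actually receive data simultaneously is the open question here:

```ini
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexerip:9997

[httpout]
httpEventCollectorToken = <your-hec-token>
uri = https://hecendpoint:8088
```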
We installed Splunk Forwarder on a Windows 2012 R2 server. At first we used Local System to run the service, and it worked fine. Then I changed the service logon to our CORP account and gave that account Modify access to the whole SplunkUniversalForwarder folder. Now when I try to start the service, it fails with the error message below:

The Splunk Forwarder Service service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs
Hello All, I have data like below. How do I extract field names like prefix:field1, prefix:field2, prefix:field3 in tabular fashion, i.e. extract all those fields containing the word "prefix:" in them?

"prefix:field1":"value1","prefix:field2":value2,"prefix:field3":value3,

Expected result:

prefix:field1  prefix:field2  prefix:field3

Thank you
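If the fields aren't already auto-extracted (e.g. via JSON field discovery), one rex-based sketch is below. It assumes values are either quoted or bare and comma-terminated, matching the sample line; the intermediate field names pname/pvalue/pairs are arbitrary helpers:

```spl
| rex max_match=0 "\"(?<pname>prefix:[^\"]+)\":\"?(?<pvalue>[^\",]+)\"?"
| eval pairs=mvzip(pname, pvalue, "=")
| mvexpand pairs
| rex field=pairs "(?<name>[^=]+)=(?<value>.*)"
| eval {name}=value
| table prefix:*
```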