All Topics

Trying to set graph colors with fieldColors in options, in Dashboard Studio. I tried setting them in both dataSources and visualizations, to no avail. What am I doing wrong? Tested on Splunk Cloud version 8.2.2203.2. The whole code for the dashboard is below.

{
    "dataSources": {
        "ds_sourcetype": {
            "type": "ds.search",
            "options": {
                "query": "index=_internal _sourcetype IN ( splunk_web_access, splunkd_access)\n| timechart count by _sourcetype",
                "fieldColors": {
                    "splunk_web_access": "#FF0000",
                    "splunkd_access": "#0000FF"
                }
            },
            "name": "Search_1"
        }
    },
    "visualizations": {
        "viz_sourcetype": {
            "type": "splunk.line",
            "options": {
                "fieldColors": {
                    "splunk_web_access": "#FF0000",
                    "splunkd_access": "#0000FF"
                },
                "yAxisAbbreviation": "auto",
                "y2AxisAbbreviation": "auto",
                "showRoundedY2AxisLabels": false,
                "legendTruncation": "ellipsisMiddle",
                "showY2MajorGridLines": true,
                "xAxisLabelRotation": 0,
                "xAxisTitleVisibility": "show",
                "yAxisTitleVisibility": "show",
                "y2AxisTitleVisibility": "show",
                "yAxisScale": "linear",
                "showOverlayY2Axis": false,
                "nullValueDisplay": "gaps",
                "dataValuesDisplay": "off",
                "showSplitSeries": false,
                "showIndependentYRanges": false,
                "legendMode": "standard",
                "legendDisplay": "right",
                "lineWidth": 2,
                "backgroundColor": "#ffffff"
            },
            "dataSources": {
                "primary": "ds_sourcetype"
            }
        }
    },
    "inputs": {
        "input_global_trp": {
            "type": "input.timerange",
            "options": {
                "token": "global_time",
                "defaultValue": "-24h@h,now"
            },
            "title": "Global Time Range"
        }
    },
    "layout": {
        "type": "grid",
        "options": {},
        "structure": [
            {
                "item": "viz_sourcetype",
                "type": "block",
                "position": {
                    "x": 0,
                    "y": 0,
                    "w": 1200,
                    "h": 400
                }
            }
        ],
        "globalInputs": [
            "input_global_trp"
        ]
    },
    "title": "dashboard_studio_test",
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    }
                }
            }
        }
    }
}

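For comparison, Dashboard Studio's splunk.line visualization documents a seriesColorsByField option for per-series colors, while fieldColors is a classic Simple XML setting; a minimal sketch of the visualization options, hedged since available options vary by Splunk Cloud version:

"viz_sourcetype": {
    "type": "splunk.line",
    "options": {
        "seriesColorsByField": {
            "splunk_web_access": "#FF0000",
            "splunkd_access": "#0000FF"
        }
    },
    "dataSources": {
        "primary": "ds_sourcetype"
    }
}
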
Hello, I'm looking into how to automate Splunk setup for newly spun-up servers. As I'm still not the most proficient with Splunk's internal configs, and so can't easily determine what's needed and what's not, I'd like some guidance on which config files I need to alter in order to prepare a newly spun-up server to be plugged into the wider Splunk deployment. Currently we have a distributed multisite setup, and the idea is to keep a collection of the needed configs so that we can simply alter and push them to the new server according to the server's task, be it indexer, search head, or any other server we might need. So what I'm asking for is a pointer to which config files need to be staged for setup. (I assume it's mostly the ../system/*.conf files, but are there any others to look out for?)

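As a reference point, a common pattern is to stage role-agnostic settings as small "base" apps under etc/apps rather than editing etc/system/local directly, so they can be version-controlled and pushed per role. A minimal sketch, with hostnames and app names purely illustrative:

# etc/apps/org_all_deploymentclient/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089

# etc/apps/org_all_forward_outputs/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

Role-specific settings (indexer clustering, search head config, and so on) would then live in their own apps layered on top.
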
Hello everyone. We are experiencing download and a few upload failures from our indexers to SmartStore in AWS S3.

[Graph for the last 24 hours]

I previously increased the Cache Manager limits from the default of 8 to 128 with a custom server.conf:

[cachemanager]
max_concurrent_downloads = 128
max_concurrent_uploads = 128

An example of an upload failure from splunkd.log (sourcetype=splunkd source="/opt/splunk/var/log/splunk/splunkd.log" component=CacheManager log_level=ERROR):

08-12-2022 03:37:28.565 +0000 ERROR CacheManager [950069 cachemanagerUploadExecutorWorker-0] - action=upload, cache_id="dma|<INDEX>~925~054DE1B7-4619-4FBC-B159-D4013D4C30AE|C1AB9688-CBC2-428C-99F5-027FA469269D_DM_Splunk_SA_CIM_Network_Sessions", status=failed, reason="Unknown", elapsed_ms=12050
08-12-2022 03:37:28.484 +0000 ERROR CacheManager [950069 cachemanagerUploadExecutorWorker-0] - action=upload, cache_id="dma|<INDEX>~925~054DE1B7-4619-4FBC-B159-D4013D4C30AE|C1AB9688-CBC2-428C-99F5-027FA469269D_DM_Splunk_SA_CIM_Network_Sessions", status=failed, unable to check if receipt exists at path=<INDEX>/dma/de/07/925~054DE1B7-4619-4FBC-B159-D4013D4C30AE/C1AB9688-CBC2-428C-99F5-027FA469269D_DM_Splunk_SA_CIM_Network_Sessions/receipt.json(0,-1,), error="network error"

An example of a download failure:

08-12-2022 09:06:44.488 +0000 ERROR CacheManager [1951184 cachemanagerDownloadExecutorWorker-113] - action=download, cache_id="dma|<INDEX>~204~431C8F6B-2313-4365-942D-09051BE286B8|C1AB9688-CBC2-428C-99F5-027FA469269D_DM_Splunk_SA_CIM_Performance", status=failed, reason="Unknown", elapsed_ms=483

We previously had an issue with NACLs in AWS where the S3 IP ranges had been updated but the NACLs were out of date; we have since allowed access to all S3 IP ranges in our region. Does anyone have an idea of how I can troubleshoot this so we can reduce, or eliminate, the failures? Has anyone else had experience with this?

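A sketch of a search to quantify and slice the failures before digging into causes; the field names come from the log lines above:

index=_internal sourcetype=splunkd component=CacheManager log_level=ERROR
| stats count by host, action, reason
| sort - count

Splitting by host and action should show whether the errors cluster on particular indexers (pointing at networking) or are spread evenly (pointing at S3-side throttling or limits).
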
I have created a Splunk query with the time modifiers "earliest" and "latest" (e.g. earliest="15/01/2022 8 am" latest="15/01/2022 10 pm"), and I have also selected a time range in the time range picker (e.g. 23/12/2022 8 am to 23/12/2022 10 pm).

Splunk query:

timeformat="%m-%d-%Y %l:%M %p" earliest="15-01-2022 08:00 AM" latest="15-01-2022 10:00 PM" index="mobileApp" homepage

Time range picker values in the UI: From: 23/12/2022 8 am; To: 23/12/2022 10 pm

Whenever I click the 'search' button, the time range picker overrides the earliest/latest time modifiers used in the Splunk query.

Question: could you please help me make the query override the time range picker values? (I need results between 15/01/2022 8 am and 15/01/2022 10 pm, based on the time modifiers only.) Your answer would be greatly appreciated!

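Inline earliest/latest modifiers normally take precedence over the time range picker, so one hedged guess is that "15-01-2022" simply fails to parse against the declared %m-%d-%Y format (there is no month 15), leaving the picker to win by default. A sketch with the month and day swapped to match the format, assuming the intended date is 15 January 2022:

timeformat="%m-%d-%Y %I:%M %p" earliest="01-15-2022 08:00 AM" latest="01-15-2022 10:00 PM" index="mobileApp" homepage

(%I is the portable 12-hour specifier; the original %l may also work, depending on the platform's strptime.)
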
Latest data within a time span. I have a query as below, but I would like to get the latest data for a field within a span of 1w.

index=my_index | timechart span=1w estdc(host) by site

I would like to get the latest data for the field "encrypted=false" within the span=1w for all hosts by site.

Edit: encrypted=false changed from true.

Edit 2: a summary of what I am trying to get, as clearly articulated by @ITWhisperer: "So my guess was right - this is what the search is basically doing. For each week, it gets the latest encryption state for each host on each site. Then keeps only those statistics where the state is false. Then counts the events (one for each host with encryption false for that week) by week and site." Finally, it reorganises the data into chart format.

Hi, I have a log file in which I have two things: a functionality, and the different repositories which use this functionality. I want to calculate the average occurrence of this functionality over each repository. The name of the functionality is a string.

Repo 1: A,A,A
Repo 2: A,A,A
Repo 3: A,A,A,A

The output should be:

Name of Repo    Avg for functionality A
1               0.3
2               0.3
3               0.4

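A sketch, assuming fields named repo and functionality are already extracted, and reading "average" as each repository's share of the total occurrences (which matches 3/10, 3/10, 4/10 in the example):

<base search> functionality="A"
| stats count as a_count by repo
| eventstats sum(a_count) as total
| eval avg_for_A = round(a_count / total, 1)
| table repo avg_for_A
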
Hi everyone, I'm starting to use the Splunk SDK for Python. I'm using Python 3.8 and Splunk 9.0. I get the error "HTTP 404: action forbidden" and I don't understand why, or how to fix it. Here is my code:

import splunklib.client as client
import splunklib.results as results

def connect_to_splunk(username, password, host='localhost', port='8089',
                      owner='admin', app='search', sharing='user'):
    try:
        service = client.connect(host=host, port=port, username=username, password=password,
                                 owner=owner, app=app, sharing=sharing)
        if service:
            print("Connected successfully!")
        return service
    except Exception as e:
        print(e)

def run_normal_mode_search(splunk_service, search_string, payload={}):
    try:
        job = splunk_service.jobs.create(search_string, **payload)
        # print(job.content)
        # check if the job is completed or not
        while True:
            while not job.is_ready():
                pass
            if job["isDone"] == "1":
                break
        for result in results.ResultsReader(job.results()):
            print(result)
    except Exception as e:
        print(e)

def main():
    try:
        splunk_service = connect_to_splunk(username='xxx', password='xxx')
        search_string = "search index= haindex1 |top host"
        payload = {"exec_mode": "normal"}
        run_normal_mode_search(splunk_service, search_string, payload)
    except Exception as e:
        print(e)

if __name__ == "__main__":
    main()

Here is the result:

Connected successfully!
HTTP 404 Not Found -- Action forbidden.

Process finished with exit code 0

Thanks and have a nice day!
Julia

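One way to narrow the problem down, sketched here with the SDK's oneshot endpoint (which bypasses the jobs workflow) and an index that always exists:

import splunklib.results as results

# Runs the search synchronously and returns a results stream directly
rr = results.ResultsReader(splunk_service.jobs.oneshot("search index=_internal | head 5"))
for item in rr:
    print(item)

If this works while jobs.create() fails, the issue likely sits in the owner/app/sharing namespace the service was connected with, rather than in the search itself.
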
I have been monitoring a few Windows hosts with the Splunk Universal Forwarder installed, and I have set up a deployment server on a Linux host to manage configurations on them. Recently, I moved one of these Windows hosts to another subnet, and then found that the deployment server no longer receives any phone home from it. I checked splunkd.log and splunkd_access.log and found no entries mentioning the Windows host's hostname/IP. However, when I run tcpdump on the Linux host, I can see the Windows host is actually sending traffic to the deployment server's port 8089. So the regular phone-home message is being sent to the deployment server, but the server does not "recognize" it as a phone-home message. Do you have any idea what could possibly be going wrong? I have re-installed the Universal Forwarder on that host, but the issue is not solved. The Splunk version is v8.1.

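A sketch of forwarder-side checks that usually localize this kind of problem; paths assume a default Windows install:

REM Confirm which deployment server the client is actually polling
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" show deploy-poll

REM Show the effective deploymentclient settings and the files they come from
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool deploymentclient list --debug

Comparing the clientName/hostname values here against the server classes on the deployment server may explain why the phone home arrives but is not matched.
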
Hi - we have a *nix server (EC2 instance) in AWS. How can we forward one of the application log files from this EC2 instance to our Splunk Cloud instance? I am a bit confused about the approach of using a Universal Forwarder. Per https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Admin/Configureinputs, the UF needs to point (via outputs.conf) to the indexer tier, but the indexer tier is all managed by Splunk themselves and we don't have any visibility into it. Whose hostname or IP am I supposed to put in outputs.conf, then? Please note my requirement is not about ingesting CloudWatch or CloudTrail logs; for that we are all set. All we have access to is the Splunk Cloud search head (which is also our IDM instance) and a couple of heavy forwarders on premise. Per "Forwarding to Splunk cloud from AWS and on prem - Splunk Community", we can send UF logs directly to Splunk Cloud, which brings me back to my original question: what exactly do I need to put in the UF conf file to route it to Splunk Cloud? Do I need to give the search head URL?

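For what it's worth, on Splunk Cloud the usual pattern is not to hand-write outputs.conf at all: the stack provides a Universal Forwarder credentials package (an app typically named splunkclouduf.spl, downloadable from the search head's Universal Forwarder page) that carries the correct indexer endpoints and certificates. A sketch, with the local file path hypothetical:

# On the EC2 instance, after downloading the credentials package from the Splunk Cloud UI
$SPLUNK_HOME/bin/splunk install app /tmp/splunkclouduf.spl
$SPLUNK_HOME/bin/splunk restart
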
I need help in sending data to two output types, [tcpout] and [httpout]. Is this possible? When I use outputs.conf pointing at both output types, I can only see data going to [httpout] (https://hecendpoint:8088); data is not going to the other indexer, which is configured under [tcpout] (indexerip:9997).

We installed the Splunk Universal Forwarder on a Windows 2012 R2 server. At first we ran the service as Local System, and it worked fine. I then changed the service logon to our CORP account and gave that account Modify access to the whole SplunkUniversalForwarder folder. Now when I try to start the service, it fails with the error below:

The Splunk Forwarder Service service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs

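A sketch of the first things worth checking when the service dies immediately after a logon-account change; the service name and install path assume a default Universal Forwarder install:

# Confirm the configured logon account for the service
sc.exe qc SplunkForwarder

# The real failure reason is usually at the tail of splunkd.log
Get-Content "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log" -Tail 50

Besides folder permissions, the account typically also needs the "Log on as a service" right for the service to start at all.
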
Hello all, I have data like below. How do I extract field names like prefix:field1, prefix:field2, prefix:field3 in tabular fashion, i.e. extract all those fields containing the word "prefix:" in their name?

"prefix:field1":"value1","prefix:field2":value2,"prefix:field3":value3,

Expected result:

prefix:field1    prefix:field2    prefix:field3

Thank you.

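A sketch, assuming the raw event is (or can be coerced into) JSON so that spath extracts the keys, and relying on table's wildcard support to pick up every field whose name starts with "prefix:":

<base search>
| spath
| table prefix:*
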
Hi, I'm trying to open my Dashboard Studio dashboard in Polywall, but it opens as a white screen with no dashboard, while classic dashboards open fine. What could the problem be?

Hi, I am trying to use Splunk for the first time and I am not able to complete the dev tutorial. I successfully created an app called "Dev Tutorial" (instructions). Then I followed these instructions and set up an index called "devtutorial" (which I enabled), installed the "Eventgen" app (which appears in my app directory as "SA-Eventgen"), navigated to "Settings >> Data Inputs >> Eventgen" and enabled the "modinput_eventgen" source type, downloaded sample_bundle and changed the index to "devtutorial", and refreshed Splunk using this link: http://localhost:8000/debug/refresh. But when I go to my "Dev Tutorial" app and search for index="devtutorial", no events show up. Also, when I go to the "SA-Eventgen" app itself, I get no data (screenshot omitted). Can I get some help with this, please?

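Two quick checks, sketched here; the second assumes SA-Eventgen writes its operational logs to _internal with an eventgen sourcetype:

index="devtutorial" earliest=0

index=_internal sourcetype=eventgen* ERROR

The first rules out a timestamp problem (generated events landing outside the search window); the second surfaces any errors Eventgen hits while reading the sample bundle.
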
I need the output token of a text box to be the "true" option of a radio button. I have two text inputs: Username, going to $upn$, and Asset, going to $asset$ (both are * by default). The base search is:

index=azuread devicename=$asset$ userPincipalName=$upn$

This works perfectly, allowing filtering by user and/or asset. But I want to pull in our VPN logs (with an append, so that both show in the same table in time order). The trouble is that our VPN logs only record by asset and are very noisy, so they need to be filtered by asset before the append. When asset is "*", everything is displayed, obscuring the Azure login detail.

I've tried adding a radio button (with the token $vpn_asset$). I've set the false option, the default, to return "This_is_not_a_valid_asset_name", which will not match anything in the VPN logs. I want to set the true option to be $asset$ so that it uses the token from the Asset text box. When selecting false, the search index=VPN deviceName=$vpn$ substitutes $vpn$ with "This_is_not_a_valid_asset_name", which is correct; but when selecting true, the token $vpn$ simply gets substituted with the literal string $asset$, whereas I would expect it to be substituted with the contents of the Asset text input. Any ideas? The code is something like this (poetic licence is used for simplicity):

input Title="Insert User Principal Name" type=text token=upn default=*
input Title="Insert Asset Name" type=text token=asset default=*
input Title="Include VPN Logs" type=radio token=vpn false="not_an_asset" true="$asset$" default=false

index=azure userPrincipalName="$upn$" userDeviceName="$asset$"
| append [search index=VPN deviceName="$vpn$"]

While "Include VPN Logs" is set to false, deviceName="not_an_asset" will result in zero VPN logs being returned. I need it to pass through the asset detail from the Asset input box when set to true, so that the Azure logon details are interspersed with the VPN logs, making assessment easier.

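In Simple XML, the usual way to get this behavior is a <change> handler on the radio input, which sets a second token from another token's current value at selection time; a sketch, with token and label names following the question and the rest illustrative:

<input type="radio" token="vpn_mode">
  <label>Include VPN Logs</label>
  <choice value="true">Yes</choice>
  <choice value="false">No</choice>
  <default>false</default>
  <change>
    <condition value="true">
      <set token="vpn">$asset$</set>
    </condition>
    <condition value="false">
      <set token="vpn">This_is_not_a_valid_asset_name</set>
    </condition>
  </change>
</input>

The search can then keep using deviceName="$vpn$": selecting true copies whatever is currently in the Asset text box into $vpn$.
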
Hello team, I'm trying to exclude null fields from my results to avoid gaps in a table. I'm currently using this query:

<my base search> | fillnull value="NULL" | search NOT NULL | table uid

The results still show all the NULL cells in the table; they are just named NULL instead of being blank. I want to show only the uids of the users. Any suggestions on how I can get past this? Thanks!

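A sketch of a more direct filter: search NOT NULL looks for the literal string NULL anywhere in the event, whereas the goal is to keep only events where uid itself has a value:

<my base search>
| where isnotnull(uid) AND uid!=""
| table uid

An equivalent shortcut is filtering with uid=* in the base search before the table command.
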
Hi folks, I have recently been testing how to ensure the connection between my deployment server and the universal forwarders is secure. I followed the instructions and deployed a new app with some stanzas, via deploymentclient.conf, to a test Windows workstation server class, to conform to this:

[deployment-client]
sslVerifyServerCert=true
caCertFile=$SPLUNK_HOME/etc/apps/<this apps name>/auth/ca.pem
sslCommonNameToCheck = <common name in DS cert>

My question is: how can I confirm it is connecting securely? Most of the documentation I find describes securing the forwarder-to-indexer connection, but not the deployment server to client/forwarder connection.

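One way to verify from the outside, sketched with a hypothetical hostname: inspect the certificate the deployment server actually presents on its management port, then confirm the client still phones home once sslVerifyServerCert is enabled; a wrong CA or common name should make the handshake fail and show up in the client's splunkd.log:

openssl s_client -connect ds.example.com:8089 -showcerts </dev/null
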
How do I remove duplicate values in a different field?

|stats count by src dest

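A hedged sketch of one common reading of this question, collapsing duplicate dest values so that each src reports a distinct count instead of repeated rows:

<base search>
| stats dc(dest) as distinct_dest by src
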
Hi Linux experts! I need help with a script that I'm working on to log sudo-enabled users. The script I'm using is below:

#!/bin/sh
getent passwd | cut -f1 -d: | xargs -L1 sudo -l -U | grep -v 'not allowed'

It is a .sh file that is run once a day. The corresponding output is then parsed and massaged by some SEDCMD stuff, not relevant here. This way, I can see which users are able to perform sudo on the machine. Note: I am aware of usersWithLoginPrivs.sh, but it includes users that I'm not interested in, hence the custom script. If there's another solution you can share, that'd be great.

But here's my PROBLEM: Linux admins are complaining that the splunk user that runs this script is generating messages for them, and they don't want to get the messages. So they suggested appending this to the command at the end of the script:

> /dev/null 2>&1

which I did. However, it no longer prints any output on those Splunk UFs that previously produced some. Yes, the main solution to this problem is to give the splunk user permission to run the script, but due to the complexity of our organization, we can't request the same thing across the board. So, of the thousands of Linux servers that we have, some can run this script and some cannot; that's currently okay. But on those that cannot, I'd like to modify the script so that it still works the same but does not produce any errors. Is there an alternative?

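A sketch of the usual fix: > /dev/null 2>&1 throws away both stdout and stderr, which is why nothing reaches Splunk any more, while discarding only stderr keeps the useful output. Adding sudo's -n (non-interactive) flag also stops it from prompting or hanging where the splunk user lacks rights:

#!/bin/sh
# stdout still flows to the forwarder; only error chatter is discarded
getent passwd | cut -f1 -d: | xargs -L1 sudo -n -l -U 2>/dev/null | grep -v 'not allowed'

Whether this also silences the notifications the admins are seeing depends on how sudo reports failed attempts on those hosts (stderr versus syslog or mail), so that part is an assumption to verify.
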
I currently have Splunk Forwarder 9.0 installed on my Windows 11 computer, monitoring a folder that syncs files from OneDrive, with the following stanza:

[batch://C:\Users\esnsanma\Documents\OneDrive - Carvajal S.A\ReportesNdd\8001466435\*]
disabled = false
index = idx_ndd_group
sourcetype = st_ndd_congroup
crcSalt = <SOURCE>
move_policy = sinkhole

Every time I restart my computer it stops detecting the files: after startup it doesn't index them, and it only detects them when I copy and paste the files back into the same folder. Is there a command to force indexing of the entire folder tree?

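Worth noting, as a hedged observation: batch with move_policy = sinkhole is a one-shot input that deletes files after indexing them, which interacts oddly with a folder OneDrive keeps re-syncing. If the files should stay on disk and be picked up whenever they appear or change, a monitor input is the usual shape (settings copied from the stanza above):

[monitor://C:\Users\esnsanma\Documents\OneDrive - Carvajal S.A\ReportesNdd\8001466435\*]
disabled = false
index = idx_ndd_group
sourcetype = st_ndd_congroup
crcSalt = <SOURCE>

For diagnosing what the forwarder thinks about each file, running splunk list inputstatus on the forwarder shows the tailing processor's view of the monitored paths.
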