All Topics


Hello all, I'm fresh out of the womb with Splunk... so please bear with me. I am attempting to install Splunk Enterprise 9.4.0 to do some Splunk training on Coursera, and I came across an error. I just got past not being able to find splunkd.exe in the bin folder by creating an exception in my antivirus. Now I cannot seem to get past the next error (below). I searched the Splunk site and Googled it, to no avail. Please help; I have a pending job interview and want to talk somewhat intelligently about Splunk. Any help will be greatly appreciated. New guy! law.dennis@infosecagency.com
Hi, we encountered a "DBX Server Error: Cannot communicate with the task server." To resolve this, I changed the Task Server port, and the error was fixed. We successfully tested this in the lower environment, which uses OpenJDK. In production, Oracle JDK (the paid version) was installed during setup. To reduce costs for the client, we attempted to switch to OpenJDK, but we encountered the same "DBX Server Error: Cannot communicate with the task server," so we reverted the change. Given that the documentation states OpenJDK is not compatible and that only JDK/JRE 17 or higher is supported (tested with JDK 17.0.12), would changing the Task Server port to 1025 and switching to OpenJDK potentially cause any issues in production?
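For reference, a minimal sketch of where the Java path and Task Server port are typically changed for DB Connect; the file location, stanza, and key names below are assumptions to verify against the documentation for your DB Connect version, and the JDK path is illustrative:

# $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/dbx_settings.conf
# (stanza and key names are assumptions; confirm against your DB Connect docs)

[java]
# Point at the OpenJDK 17 installation being tested
javaHome = /usr/lib/jvm/jdk-17

[taskserver]
# The non-default Task Server port from the question
port = 1025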
I tried to run ./splunk remove shcluster-member -mgmt_uri https://<CAPTAIN_IP>:8089 on the non-captain search head, which was successful. But after the re-election of the new captain, this command gave me an error. I ran:

./splunk add shcluster-member -mgmt_uri https://<NEW_CAPTAIN>:8089 -current_member_uri https://<PREV_CAPTAIN>:8089

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Argument "mgmt_uri" is not supported by this handler.

But now, when I run ./splunk show shcluster-status --verbose on the new captain, I see the previous captain is no longer in the member section. If anyone could help, I would appreciate it.
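The error is likely just the flag name: splunk add shcluster-member does not take -mgmt_uri. Per the Splunk documentation, you pass -current_member_uri when running the command on the member being added, or -new_member_uri when running it from an existing member. A sketch, reusing the placeholder URIs from above:

# Run ON the search head being (re)added, pointing at any current member:
./splunk add shcluster-member -current_member_uri https://<NEW_CAPTAIN>:8089

# Or run FROM an existing member (e.g. the new captain), pointing at the member to add:
./splunk add shcluster-member -new_member_uri https://<PREV_CAPTAIN>:8089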
Are we able to download the apps directly from Splunkbase instead of being redirected to the Broadcom website? We do not have access permission for the Broadcom site. https://splunkbase.splunk.com/app/3454 https://splunkbase.splunk.com/app/3453 Thanks
I want to get a list of the indexes and sourcetypes that have not been used by anyone for more than 90 days.
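One common approach, offered as a sketch rather than a definitive method, is to mine the audit index for which indexes appear in completed searches and flag those not seen in 90 days. The rex here is a rough assumption about how index= appears in your users' search strings, and sourcetype usage would need a similar pass:

index=_audit action=search info=completed search=*
| rex field=search max_match=0 "index\s*=\s*\"?(?<searched_index>[\w\*-]+)"
| mvexpand searched_index
| stats latest(_time) AS last_searched BY searched_index
| where last_searched < relative_time(now(), "-90d")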
Hello, I updated the Splunk App for Lookup File Editing from v3.6.0 to v4.0.5. After the update, when I check Health > Logs or Health > Status, it shows "Status (Old)". I can see "Status (Old)" in the lookup_editor_status.xml file, but what does that mean? What is old? Please advise.
Linear memory growth occurs on any Splunk instance configured to receive data on splunktcpin, tcpin, and udpin ports. The following config in server.conf will fix the memory growth:

[prometheus]
disabled = true
Hi guys! I think this screenshot describes my problem pretty well. I just tried to play around with ChatGPT and Splunk, but I didn't succeed. Does someone know what to do with this error message? Please help me out here. Best regards
I have a field coming from a calculation, like a percentage field, and the request from the user is to display it in text format:

| makeresults
| eval value = 36
| eval display = "the total percentage is $value$ %"
| fields - value

How can I display "the total percentage is 36 %"?
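The $value$ token syntax is only substituted in dashboards; inside eval it stays a literal string. In SPL you build the string with the . concatenation operator (eval coerces the number to a string). A minimal sketch:

| makeresults
| eval value = 36
| eval display = "the total percentage is " . value . " %"
| fields - value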
Hello, I am trying to drill down on a single value panel, but it's not working. It looks simple, but I am not sure what I am doing wrong. Here is my source code:

{
  "title": "Set Tokens on Click - Example",
  "description": "",
  "inputs": {
    "input_global_trp": {
      "options": {
        "defaultValue": "-24h@h,now",
        "token": "global_time"
      },
      "title": "Global Time Range",
      "type": "input.timerange"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
          }
        }
      }
    }
  },
  "visualizations": {
    "viz_column_chart": {
      "containerOptions": {},
      "context": {},
      "dataSources": {
        "primary": "ds_qBGlESX2"
      },
      "eventHandlers": [
        {
          "options": {
            "tokens": [
              {
                "key": "row.method.value",
                "token": "method"
              }
            ]
          },
          "type": "drilldown.setToken"
        }
      ],
      "options": {},
      "showLastUpdated": false,
      "showProgressBar": false,
      "title": "HTTP Request Method",
      "type": "splunk.singlevalue"
    },
    "viz_pie_chart": {
      "dataSources": {
        "primary": "ds_c8AfQapt"
      },
      "title": "Response Codes for Method $method$",
      "type": "splunk.pie"
    }
  },
  "dataSources": {
    "ds_c8AfQapt": {
      "name": "Search_2",
      "options": {
        "query": "index=_internal method=$method$ | stats count by status"
      },
      "type": "ds.search"
    },
    "ds_qBGlESX2": {
      "name": "Search_1",
      "options": {
        "enableSmartSources": true,
        "query": "index=_internal method=GET | stats count by method"
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [
      "input_global_trp"
    ],
    "layoutDefinitions": {
      "layout_1": {
        "structure": [
          {
            "item": "viz_column_chart",
            "position": { "h": 400, "w": 600, "x": 0, "y": 0 },
            "type": "block"
          },
          {
            "item": "viz_pie_chart",
            "position": { "h": 400, "w": 600, "x": 600, "y": 0 },
            "type": "block"
          }
        ],
        "type": "grid"
      }
    },
    "tabs": {
      "items": [
        {
          "label": "New tab",
          "layoutId": "layout_1"
        }
      ]
    }
  }
}
We recently installed Splunk Universal Forwarder 9.3.2 on a Windows 2019 server. After starting the forwarder, I see the below error in splunkd.log. I tried uninstalling and reinstalling the UF, but I still get the same error. Please let me know how to fix it.

Error:

02-25-2025 14:52:06.747 -0600 WARN TcpOutputProc [12132 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=(ip of indexer) inside output group splunkcloud_ from host_src=(ip folder source) has been blocked for blocked_seconds=5600. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
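That warning usually means the receiving side (here, Splunk Cloud) is not accepting data, rather than a fault on the UF itself. If you have search access, one hedged way to check whether queues on the receiving side are blocked (a sketch; exact field values vary by deployment) is:

index=_internal source=*metrics.log* group=queue blocked=true
| stats count BY host, name
| sort - count

Also confirm from the forwarder host that the receiving port configured in outputs.conf (typically 9997) is reachable through any firewall in between.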
I have an errant application that is sending too much data to my Splunk Enterprise instance. This is causing licensing overages and warnings. Until I can fix all the occurrences of this application, I need to configure Splunk to just drop these oversized entries. I don't want to reject/truncate all messages, just anything over, say, 512 KB. My understanding is I can do this with updates to transforms.conf and props.conf. Here's my transforms.conf:

[drop_unwanted_logs]
REGEX = (DEBUG|healthcheck|keepalive) # Drop logs containing these terms
DEST_KEY = queue
FORMAT = nullQueue

[drop_large_events]
REGEX = ^.{524288,} # Matches any log >= 512 KB
DEST_KEY = queue
FORMAT = nullQueue

Ideally, I want this to focus on two of my HECs, so I updated props.conf:

[source::http:event collector 1]
TRANSFORMS-null=drop_large_events
TRUNCATE = 524288

[source::http:event collector 2]
TRANSFORMS-null=drop_large_events
TRUNCATE = 524288

[sourcetype::http:event collector 1]
TRANSFORMS-null=drop_large_events
TRUNCATE = 524288

[sourcetype::http:event collector 2]
TRANSFORMS-null=drop_large_events
TRUNCATE = 524288

Am I heading in the right direction? Or will the following apply to all HECs?

[sourcetype::httpevent]
TRANSFORMS-null=drop_large_events
TRUNCATE = 524288
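Two details worth checking, offered as a hedged sketch rather than a definitive answer: in Splunk .conf files a # starts a comment only at the beginning of a line (a trailing # on a REGEX line becomes part of the regex), and props.conf has no sourcetype:: prefix; a sourcetype stanza is just the bare sourcetype name, while source:: keeps its prefix. Restated with those fixes, using the stanza names from the question:

# transforms.conf
[drop_large_events]
# Matches any event of 524288 characters (512 KB) or more; comment kept on its own line
REGEX = ^.{524288,}
DEST_KEY = queue
FORMAT = nullQueue

# props.conf: source:: stanzas keep the prefix...
[source::http:event collector 1]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288

# ...but a sourcetype stanza is just the sourcetype name itself
[http:event collector 1]
TRANSFORMS-null = drop_large_events
TRUNCATE = 524288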
My event and inputs.conf (sourcetype = rsa:syslog):

feb 01 10:24:12 myhostname 2025-02-01 10:24:12,999, myhostname, audit.admin.com.cd.etc info

My props.conf:

[rsa:syslog]
TRANSFORMS-change_sourcetype = change_sourcetype

My transforms.conf:

[change_sourcetype]
DESK_KEY = MetaData:Sourcetype
SOURCE_KEY = MetaData:Sourcetype
REGEX = \,\s+adudit\.admin
FORMAT = sourcetype::new:sourcetype

Could anyone help? My sourcetype doesn't change to "new:sourcetype". Thank you.
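Three likely culprits, as a sketch based on the sample event above: DESK_KEY should be DEST_KEY; the regex says adudit where the event says audit; and SOURCE_KEY = MetaData:Sourcetype makes the regex run against the sourcetype value instead of the event text, so it can never match. Corrected:

[change_sourcetype]
# Match against the raw event text (_raw is the default SOURCE_KEY)
SOURCE_KEY = _raw
# "audit", not "adudit", per the sample event
REGEX = ,\s+audit\.admin
# DEST_KEY, not DESK_KEY
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::new:sourcetype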
I was told that there is an app that can run the btool command on cloud instances. Does anybody know the name of this app?
Hello Team, can someone provide me with the steps to integrate AWS GuardDuty logs using the Splunk Add-on for AWS? Please also provide documentation, if any exists.
Hi, I have the following conf for Application events:

[WinEventLog://Application]
_TCP_ROUTING = sample
current_only = 0
disabled = false
index = eventviewer
sourcetype = applicationevents
start_from = oldest
blacklist1 = EventCode="^(33)" SourceName="^Chrome$"

I have EventCode 256 events in the Application log under Source Chrome, but I do not see any of those events in Splunk for some reason. I don't see any errors in splunkd.log. What could be the reason for this? I would really appreciate insight on this.
Hi, I need to ingest some logs into Splunk, so a files & directories data input is my choice. A new index was also created, with _json as the sourcetype. Now I'm trying to use spath in search to parse JSON data with multiple fields, with no luck yet. I just checked my JSON file: valid JSON. Here we have some parsed JSON, but I want to get email, first_name, and last_name from properties.attributes, to be able to parse or filter by any of these fields in the future. Appreciate any help. Small source file: https://paste2.org/OsEXkgbJ

Here is what I tried:

index=ep_log event=created | spath properties.attributes
index=erp_log event=created | spath properties

and so on.
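spath with an explicit path= and output= usually handles nested attributes like these. A sketch, assuming the layout of the pasted sample (properties.attributes.email and so on) and the index/event filter from the attempts above:

index=ep_log event=created
| spath input=_raw path=properties.attributes.email output=email
| spath input=_raw path=properties.attributes.first_name output=first_name
| spath input=_raw path=properties.attributes.last_name output=last_name
| table _time email first_name last_name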
I would like to get a count of events for all data ingested in 2024. I have hundreds of indexes, and all data over 90 days goes to DDAA. I can use eventcount for the searchable data and just multiply by 4 for an estimate. Using:

| eventcount summarize=false index=*
| stats sum(count) as total_events by index
| fieldformat total_events=tostring(total_events,"commas")
| addcoltotals

Is there a way to get event counts for archived data?
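Archived (DDAA) buckets are not searchable until they are restored, so eventcount cannot see them. One hedged alternative, usable only if your _internal retention reaches back far enough (it often will not for a full year), is to total the per-index event counts that metrics.log recorded at ingest time:

index=_internal source=*metrics.log* group=per_index_thruput earliest=1/1/2024:00:00:00 latest=1/1/2025:00:00:00
| stats sum(ev) as ingested_events by series
| addcoltotals

Here series is the index name and ev is the number of events per sampling interval; treat the result as an estimate of ingested (not currently searchable) events.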
I need help building a proper rex expression to extract the bold text (the hw_state temp value) from the following raw data:

{"bootcount":8,"device_id":"XXXX","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC","local_time":"2025-02-20T00:34:58.406-06:00", "location":{"city":"XXXX","country":"XXXX","latitude":XXXX,"longitude":XXXX,"state":"XXXX"},"log_level":"info", "message":"martini::hardware_controller: Unit state update from cook client target: Elements(temp: 500°F, [D, D, D, D, D, F: 0]), hw_state: Elements(temp: 500°F, [D, D, D, D, D, F: 115])\u0000", "model_number":"XXXX","sequence":372246,"serial":"XXXX","software_version":"2.3.0.276","ticks":0,"timestamp":1740033298,"timestamp_ms":1740033298406}

I have tried:

rex field=message "(?=[^h]*(?:hw_state:|h.*hw_state:))^(?:[^\(\n]*\(){2}\w+:\s+(?P<set_temp>\d+)"
rex field=message ".*hw_state: Elements\(temp:(?<set_temp>\d+),.*"

with no results yielded. What is the proper rex expression to extract 500 from the message field?
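The second attempt is close; it just misses the whitespace after temp: (the event reads "temp: 500°F", with a space). A minimal sketch that anchors on the hw_state side:

| rex field=message "hw_state: Elements\(temp:\s*(?<set_temp>\d+)"

The \s* absorbs the space before 500, and anchoring on hw_state: skips the earlier cook-client target temp.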
Hello, Does anyone know when this app will become cloud compliant?