All Posts

Hi, I initially created an index named XYZ, and there are around 60 reports/alerts and 15 dashboards built on this index. The index name now has to be changed to XYZ_audit, and I have to update all of these reports with the new index name. Can I do this automatically using a script or some other way?
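A possible approach (untested sketch): if the reports and dashboards live under app-level local directories on the search head and reference the index literally as index=XYZ, a bulk find-and-replace on disk can do it. Back up $SPLUNK_HOME/etc first; the paths below are typical locations and an assumption, not a verified procedure.

# list every saved search and dashboard that references the old index
grep -rl "index=XYZ" $SPLUNK_HOME/etc/apps/*/local/savedsearches.conf $SPLUNK_HOME/etc/apps/*/local/data/ui/views/

# rewrite only exact matches of the old name (the \b keeps an already-renamed
# index=XYZ_audit from being touched if the command is re-run) - GNU sed shown
grep -rl "index=XYZ" $SPLUNK_HOME/etc/apps/*/local/ | xargs sed -i 's/index=XYZ\b/index=XYZ_audit/g'

Objects saved under user directories ($SPLUNK_HOME/etc/users/...) need the same treatment, and the changes only show up after a restart or a debug/refresh.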
Ah okay, then I would execute it in the following order: 1. Do the fresh installation. 2. Copy all custom apps and their configuration files from your source to %SPLUNK_HOME%\etc\apps\. 3. Start Splunk.
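On Windows that could look roughly like this (the paths are the install defaults and purely an assumption; adjust to your environment):

REM after the fresh MSI install, copy the custom apps from the old server's backup
xcopy /E /I D:\backup\SplunkUniversalForwarder\etc\apps "C:\Program Files\SplunkUniversalForwarder\etc\apps"
REM then start the forwarder
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" start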
I am a beginner with Splunk. I am setting up Splunk Enterprise in a three-tier architecture with a Search Head server, an Indexer server, and a Heavy Forwarder server. I want to install the Splunk Add-on for Microsoft Cloud Services on the Heavy Forwarder server to ingest data from Azure Event Hubs. However, when I check the logs of the installed add-on (splunk_ta_microsoft-cloudservices_azure_audit.log), I see the following error:

2024-12-13 02:44:48,835 +0000 log_level=ERROR, pid=33699, tid=MainThread, file=rest.py, func_name=splunkd_request, code_line_no=67 | Failed to send rest request=https://127.0.0.1:8089/services/server/info, errcode=unknown, reason=
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connection.py", line 175, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/util/connection.py", line 95, in create_connection
    raise err
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/util/connection.py", line 85, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connectionpool.py", line 723, in urlopen
    chunked=chunked,
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connectionpool.py", line 1061, in _validate_conn
    conn.connect()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connection.py", line 363, in connect
    self.sock = conn = self._new_conn()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/urllib3/connection.py", line 187, in _new_conn
    self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f48c2a95e50>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred: ~~~

Concern Point #1
It seems that the error was resolved by adding the following lines to /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/local/web.conf (i.e. changing the request destination from the local Heavy Forwarder server to the <IP address of the Search Head server>):

[settings]
mgmtHostPort = <IP address of the Search Head server>:8089

However, I am now seeing the following log, and a 401 is being returned. The request destination is https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1

Concern Point #2
I thought I could resolve this in the same way as Concern Point #1, by changing the request destination to the <IP address of the Search Head server>, but I don't know how to do that (I'm unsure if this approach is correct, so I would appreciate your guidance).

splunk_ta_microsoft-cloudservices_azure_audit.log:
2024-12-13 10:41:22,011 +0000 log_level=ERROR, pid=194872, tid=MainThread, file=config.py, func_name=log, code_line_no=66 | UCC Config Module: Fail to load endpoint "global_settings" - Unspecified internal server error.
reason={"messages":[{"type":"ERROR","text":"Unexpected error \"<class 'splunktaucclib.rest_handler.error.RestError'>\" from python handler: \"REST Error [401]: Unauthorized -- call not properly authenticated\". See splunkd.log/python.log for more details."}]} File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/mscs_azure_audit.py", line 21, in <module> schema_para_list=("description",), File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_mod_input.py", line 232, in main log_suffix=log_suffix, File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_mod_input.py", line 130, in run tconfig = tc.create_ta_config(settings, config_cls or tc.TaConfig, log_suffix) File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_config.py", line 228, in create_ta_config return config_cls(meta_config, settings, stanza_name, log_suffix) File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_config.py", line 53, in __init__ self._load_task_configs() File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_config.py", line 75, in _load_task_configs config_handler = th.ConfigSchemaHandler(self._meta_config, self._client_schema) File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_helper.py", line 95, in __init__ self._load_conf_contents() File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/data_collection/ta_helper.py", line 120, in _load_conf_contents self._all_conf_contents = self._config.load() File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/config.py", line 143, in load log(msg, level=logging.ERROR, need_tb=True) File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/config.py", line 64, in log stack = "".join(traceback.format_stack()) NoneType: None ~~~ Supplementary Information The results of `curl` commands on the Heavy Forwarder server are as follows: curl -k https://<IP address of the Search Head server>:8089/services/server/info → 200 curl -k https://<IP address of the Indexer server>:8089/services/server/info → 200 curl -k https://127.0.0.1:8089/services/server/info → 401 (Unauthorized) curl -k https://<IP address of the Search Head server>/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1 401 (Unauthorized) curl -k https://<IP address of the Indexer server>:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1 → 401 (Unauthorized) curl -k https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/splunk_ta_mscs_settings?count=-1 → 401 (Unauthorized) If you need any further adjustments or specific aspects to focus on, please let me know!
I don't have a deployment server, hence I'm copying the home folder across.
Hello guys, hope someone can help us out. I am using Splunk Enterprise and am trying to store events after CIM mapping (via a Data Model) in an S3 bucket, but this doesn't seem to be configurable on the Splunk side. My current approach: I created a scheduled search with a report and stream the results to a summary_index, and I created an Ingest Action to stream all incoming events from summary_index to S3. The workflow works fine, except that we get the same raw events written to S3, and what we want is to have the MAPPED events stored in S3. Do you know if/how we can stream mapped events from one index into another?

Some more details: the reason behind this is that the raw event has nested collections that we would like to restructure before giving the data back to the user. That's why the initial thought was to implement this logic: 1. Our_Service sends data to Splunk. 2. Splunk performs the needed mapping and sends the mapped data to S3. 3. Our_Service queries the bucket to get that formatted data. I was also trying to reuse the same tstats search as we use for the dashboard, but that eventually becomes a table and won't show events (only a table under Statistics), so the summary_index stays empty in that case. Any help is highly appreciated.
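One hedged idea, since collect writes whatever the search hands it: if the scheduled search drops _raw and keeps only the data-model fields, collect stores the results as field=value pairs instead of the original event text, and that is what the Ingest Action would then ship to S3. The data-model and field names below are placeholders for illustration:

| datamodel Network_Traffic All_Traffic search
| fields _time All_Traffic.src All_Traffic.dest All_Traffic.action All_Traffic.bytes
| fields - _raw
| collect index=summary_mapped

Because the results no longer carry _raw, collect writes them with the stash sourcetype as key=value pairs, which the downstream consumer can parse instead of the original nested JSON.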
Hello Daniel, a very big thank you, I truly appreciate the time and effort you put in to resolve it. The settings worked like a charm.
What is the reason for the planned migration? To me it sounds more like you just want to install a new Universal Forwarder on a different server to collect the logs. Usually you keep all specific configuration like inputs.conf and outputs.conf on your deployment server, and when setting up a new UF you only add it to an existing or new serverclass to roll out the configuration files. I would do a fresh installation on the new server, configure the local configuration (e.g. deploymentclient.conf), and then distribute all other configurations via the Deployment Server. Regarding the installation routine, I recommend taking a look at the documentation: Install a Windows universal forwarder - Splunk Documentation. A silent command-line installation is also described there.
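For completeness, the local piece on the new forwarder is usually just a minimal deploymentclient.conf like the sketch below (host name and port are placeholders, assuming the default management port on the deployment server):

[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089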
Okay, and I guess the data is pulled via the Tenable API, right? Could you try the following in transforms.conf:

[tenable_remove_logs]
REGEX = (?m)(ABCSCAN)
DEST_KEY = queue
FORMAT = nullQueue

If it is not working, I would increase DEPTH_LIMIT for testing.
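For reference, a sketch of the full pair with a raised DEPTH_LIMIT, assuming the sourcetype tenable:sc:vuln from the original post and that these files live on the first full Splunk instance that parses the data (Heavy Forwarder or indexer tier):

props.conf
[tenable:sc:vuln]
TRANSFORMS-remove_scan_info = tenable_remove_logs

transforms.conf
[tenable_remove_logs]
REGEX = (?m)(ABCSCAN)
DEST_KEY = queue
FORMAT = nullQueue
# DEPTH_LIMIT defaults to 1000; these Nessus events are long, so the regex
# engine may give up before it ever reaches the scan name
DEPTH_LIMIT = 10000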
Hello, bit of a novice here. I am in the process of planning to migrate a Splunk universal forwarder from one Windows server to another.

To my understanding, this is the process I have come up with: 1. Copy the Splunk home folder from the original forwarder to the newly commissioned server. 2. Download the same version of Splunk. 3. Run the MSI executable, agree to the terms and conditions, open the customise settings, and select the same install location as the pre-existing configuration.

Will the installer then prompt me for any other information, given that it already has the configuration? For example, will it ask me for the deployment server address or the indexer address, which system account is being used, or to create a local Splunk administration account?

Will I need to change the host name in any configuration files if it is not the same as the original server?
On the Heavy Forwarder.
Where have you applied these settings? On an indexer or on a Heavy Forwarder?
It also does not work for me. We had UF version 8.2.6 and upgraded to 9.1.7. We also tried versions 9.0.9, 9.2.4, and 9.3.2. Regardless of wec_event_format = raw_event, we still get errors in the log: Invalid WEC content-format:'Events', for splunk-format = rendered_event. See the description for the 'wec_event_format' setting at $SPLUNK_HOME/etc/system/README/inputs.conf.spec for more details. And the data is not coming in.
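For comparison, the setting referenced in that error lives in the Windows Event Log input stanza; a sketch of what a WEC collector input might look like is below (the stanza name assumes the forwarded events land in the standard ForwardedEvents channel - adjust if your subscription targets a different one):

inputs.conf
[WinEventLog://ForwardedEvents]
# the subscription delivers the raw (Events) content format, so read it as raw
wec_event_format = raw_event

Running splunk btool inputs list --debug on the UF and checking the [WinEventLog://ForwardedEvents] stanza shows whether the setting is actually being applied where you expect it.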
Hi, I have Tenable JSON logs. I wrote a regex and am trying to send these events to the null queue, but they are not going to the null queue. A sample log is given below:

{ SC_address: X.xx.xx acceptRisk: false acceptRiskRuleComment: acrScore: assetExposureScore: baseScore: bid: checkType: summary cpe: custom_severity: false cve: cvssV3BaseScore: cvssV3TemporalScore: cvssV3Vector: cvssVector: description: This plugin displays, for each tested host, information about the scan itself : - The version of the plugin set. - The type of scanner (Nessus or Nessus Home). - The version of the Nessus Engine. - The port scanner(s) used. - The port range scanned. - The ping round trip time - Whether credentialed or third-party patch management checks are possible. - Whether the display of superseded patches is enabled - The date of the scan. - The duration of the scan. - The number of hosts scanned in parallel. - The number of checks done in parallel. dnsName: xxxx.xx.xx exploitAvailable: No exploitEase: exploitFrameworks: family: {...} firstSeen: X hasBeenMitigated: false hostUUID: hostUniqueness: repositoryID,ip,dnsName ip: x.x.x.x ips: x.x.x.x keyDrivers: lastSeen: x macAddress: netbiosName: x\x operatingSystem: Microsoft Windows Server X X X X patchPubDate: -1 pluginID: 19506 pluginInfo: 19506 (0/6) Nessus Scan Information pluginModDate: X pluginName: Nessus Scan Information pluginPubDate: xx pluginText: <plugin_output>Information about this scan : Nessus version : 10.8.3 Nessus build : 20010 Plugin feed version : XX Scanner edition used : X Scanner OS : X Scanner distribution : X-X-X Scan type : Normal Scan name : ABCSCAN Scan policy used : x-161b-x-x-x-x/Internal Scanner 02 - Scan Policy (Windows & Linux) Scanner IP : x.x.x.x Port scanner(s) : nessus_syn_scanner Port range : 1-5 Ping RTT : 14.438 ms Thorough tests : no Experimental tests : no Scan for Unpatched Vulnerabilities : no Plugin debugging enabled : no Paranoia level : 1 Report verbosity : 1 Safe checks : yes Optimize the test : yes Credentialed checks : no Patch management checks : None Display superseded patches : no (supersedence plugin did not launch) CGI scanning : disabled Web application tests : disabled Max hosts : 30 Max checks : 5 Recv timeout : 5 Backports : None Allow post-scan editing : Yes Nessus Plugin Signature Checking : Enabled Audit File Signature Checking : Disabled Scan Start Date : x/x/x x Scan duration : X sec Scan for malware : no </plugin_output> plugin_id: xx port: 0 protocol: TCP recastRisk: false recastRiskRuleComment: repository: {...} riskFactor: None sc_uniqueness: x_x.x.x.x_xxxx.xx.xx seeAlso: seolDate: -1 severity: informational severity_description: Informative severity_id: 0 solution: state: open stigSeverity: synopsis: This plugin displays information about the Nessus scan. temporalScore: uniqueness: repositoryID,ip,dnsName uuid: x-x-x-xx-xxx vendor_severity: Info version: 1.127 vprContext: [] vprScore: vulnPubDate: -1 vulnUUID: vulnUniqueness: repositoryID,ip,port,protocol,pluginID xref: }

In props.conf:
[tenable:sc:vuln]
TRANSFORMS-Removetenable_remove_logs = tenable_remove_logs

In transforms.conf:
[tenable_remove_logs]
SOURCE_KEY = _raw
REGEX = ABCSCAN
DEST_KEY = queue
FORMAT = nullQueue

It is not working. Any solution? I later removed SOURCE_KEY as well; that is also not working.
Have you tried to map the "Name" to the "role" variable?  Have you checked the supported group information formats in the docs and verified it? Configure SAML SSO using configuration files on Splunk Enterprise - Splunk Documentation
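As a concrete (hypothetical) example of that mapping in authentication.conf, where the value on the right must match the group name the IdP actually sends in the assertion:

[roleMap_SAML]
admin = Splunk_Admins
user = Splunk_Users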
Please share the screenshots and the search that you use to fill the summary index.
Hi @YuliyaVassilyev , in Community, there are many solutions to your request, see at https://community.splunk.com/t5/Splunk-Search/How-do-I-combine-subtotals-and-totals-in-a-search-query/m-p/391298 https://community.splunk.com/t5/Splunk-Search/How-do-I-edit-my-search-to-get-both-subtotals-and-the-grand/m-p/240503 https://community.splunk.com/t5/Splunk-Search/Show-subtotals-in-results-table/m-p/102875 https://community.splunk.com/t5/Splunk-Search/How-to-add-sub-totals-to-a-table/m-p/317028 Test them. Ciao. Giuseppe P.S.: Karma Points are appreciated by all the contributors  
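Most of those answers boil down to appendpipe for the subtotal rows plus addcoltotals for the grand total; a generic sketch with placeholder field names:

... | stats sum(amount) as amount by category subcategory
| appendpipe [ stats sum(amount) as amount by category | eval subcategory="Subtotal" ]
| sort category subcategory
| addcoltotals labelfield=category label="Grand Total" amount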
Hi Team, to reduce the time taken to load my Splunk dashboard, I created a new summary index to collect events, set up a report which retrieves the past 30 days of events, scheduled that report to run every hour, and enabled summary indexing as per the documentation: Create a summary index in Splunk Web - Splunk Documentation. However, when checking the index, I can see that data ingestion is not taking place according to the scheduled report. Please find the attached screenshots for additional reference. Looking forward to a workaround or a solution.
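A couple of searches that usually narrow this down (the report and index names below are placeholders): first confirm the scheduler is actually running the report and returning results, then check what, if anything, has landed in the summary index.

index=_internal sourcetype=scheduler savedsearch_name="My 30 day summary report"
| table _time status result_count run_time

index=my_summary_index earliest=-24h
| stats count by source

With "enable summary indexing" the events only appear after a scheduled run completes and returns results, and the source field of the summary events carries the name of the populating report.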
Hi, normally Splunk's way of working is different from what you have in procedural languages. It helps us to help you if we can see your real use case and sample events. If you really need this kind of functionality, you could look at the map command to achieve it. Anyway, it has some restrictions which could create additional challenges for you. r. Ismo
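A minimal sketch of map, with made-up index and field names, just to show the shape of it (each row from the first search drives one secondary search via the $...$ token substitution, and maxsearches caps how many are run):

index=web_errors status=500
| dedup session_id
| map maxsearches=10 search="search index=app_logs session_id=$session_id$ | head 5"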
Hi @matthewroberson  ES 8.x is not yet available to download for on-prem deployments. Some cloud customers are on ES 8.0. Hopefully it will be available for on-prem download from Splunkbase in the next couple of weeks.
What is the correlation to join the two datasets together, i.e. in the second index where you want field4, how does it know which event in the second dataset correlates with which event in the first index? Generally the solution is to search both datasets and then combine the two with some common correlation element using stats. Can you be a bit more specific and give a more detailed example?
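In sketch form, assuming the two indexes share some common key (all names below are placeholders):

(index=index_a) OR (index=index_b)
| stats values(field4) as field4 values(other_field) as other_field by common_id

Events from both indexes that carry the same common_id end up on one row, which usually covers the cases people otherwise reach for join for.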