All Posts

Hello to the community, I am trying to query Splunk from an external SDK, so I have asked our admins for token authentication, but I am told that Splunk does not support the coexistence of SSO (which is used now) and token-based authentication. A quick query to ChatGPT suggests this may be possible, but I'd like to have it confirmed. Could anyone who uses or administers such a deployment confirm this?   B.r.   Lukas
Great question! Let me clarify how tag enrichment works when ingesting AWS logs via Splunk's Data Manager:

1. CloudWatch Log Group Tags: When you ingest logs via Data Manager from CloudWatch Log Groups, the AWS resource tags (attached directly to the log group) are not automatically appended to your log events in Splunk. Data Manager currently provides no built-in functionality to propagate AWS resource tags into the log events.

Potential solutions if you need custom tags (env=, service=, custom=) in your log events ingested from CloudWatch:

- Implement the tags within the logs themselves, directly at the application logging layer (Lambda function code or ECS task logging output).
- Enrich the logs post-ingestion in Splunk using lookups or calculated fields (see the sketch below).

The same applies to Lambda logs: CloudWatch does not propagate resource tags into the events ingested by Data Manager, so as with ECS you either add the tags explicitly in your Lambda function's logging statements or enrich them after ingestion in Splunk.
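As a sketch of the lookup-based enrichment, assuming a hypothetical lookup aws_resource_tags that you maintain yourself (e.g. a CSV mapping log group names to their tags) — the index, sourcetype, and field names here are placeholders, not anything Data Manager creates for you:

index=aws sourcetype=aws:cloudwatchlogs
| lookup aws_resource_tags log_group OUTPUTNEW env, service, custom

The same lookup can be wired up as an automatic lookup in props.conf (LOOKUP-aws_tags = aws_resource_tags log_group OUTPUTNEW env, service, custom) so the tag fields appear on every search without changing your queries.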
hi @livehybrid, getting some more errors:

00 ERROR ExecProcessor [2322799 ExecProcessorSchedulerThread] - message from "/app/splunk/bin/python3.7 /app/splunk/etc/apps/TA-mimecast-for-splunk/bin/mimecast_ttp_attachment_protect.py" timeout=30.0
04-16-2025 04:07:19.696 -0400 ERROR ExecProcessor [2322799 ExecProcessorSchedulerThread] - message from "/app/splunk/bin/python3.7 /app/splunk/etc/apps/TA-mimecast-for-splunk/bin/mimecast_ttp_attachment_protect.py" File "/app/splunk/etc/apps/TA-mimecast-for-splunk/bin/ta_mimecast_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 478, in send_http_request
04-16-2025 04:07:19.696 -0400 ERROR ExecProcessor [2322799 ExecProcessorSchedulerThread] - message from "/app/splunk/bin/python3.7 /app/splunk/etc/apps/TA-mimecast-for-splunk/bin/mimecast_ttp_attachment_protect.py" proxy_uri=self._get_proxy_uri() if use_proxy else None)
0
We are getting some more errors, would you please help me with that?

00 ERROR ExecProcessor [2322799 ExecProcessorSchedulerThread] - message from "/app/splunk/bin/python3.7 /app/splunk/etc/apps/TA-mimecast-for-splunk/bin/mimecast_ttp_attachment_protect.py" timeout=30.0
04-16-2025 04:07:19.696 -0400 ERROR ExecProcessor [2322799 ExecProcessorSchedulerThread] - message from "/app/splunk/bin/python3.7 /app/splunk/etc/apps/TA-mimecast-for-splunk/bin/mimecast_ttp_attachment_protect.py" File "/app/splunk/etc/apps/TA-mimecast-for-splunk/bin/ta_mimecast_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 478, in send_http_request
04-16-2025 04:07:19.696 -0400 ERROR ExecProcessor [2322799 ExecProcessorSchedulerThread] - message from "/app/splunk/bin/python3.7 /app/splunk/etc/apps/TA-mimecast-for-splunk/bin/mimecast_ttp_attachment_protect.py" proxy_uri=self._get_proxy_uri() if use_proxy else None)
0
Why are you trying to do this at index time? Timestamps are better manipulated and compared as epochs; they only "need" to be converted to strings when being displayed in reports and dashboards.
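As a minimal illustration of the search-time approach (display_time is an arbitrary field name):

| eval display_time=strftime(_time, "%m/%d/%Y %H:%M:%S.%3N")
| table display_time, message

This keeps _time as an epoch for sorting and comparison while rendering a human-readable string only in the output.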
Hi @livehybrid,  I wanted this while indexing data. I don't see the timestamp value being overridden with the actual value it has (epoch); in addition, I see the value "none" returned in the timestamp values. I want the events to be shown like this in the Splunk results.

Raw events before indexing:

{"level":"warn","service":"resource-sweeper","timestamp":1744382735963,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744390525975,"message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744390538019,"message":"2 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":1744390555970,"message":"1 nodes are not allocated"}

How I want the events to be shown in Splunk:

{"level":"warn","service":"resource-sweeper","timestamp":"04/16/2025 16:55:23.650","message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":"04/16/2025 16:55:25.975","message":"1 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":"04/16/2025 16:55:38.019","message":"2 nodes are not allocated"}
{"level":"warn","service":"resource-sweeper","timestamp":"04/16/2025 16:55:55.970","message":"1 nodes are not allocated"}

The timestamp values should be the above ones.
Hi @JoaoGuiNovaes

Based on the Enterprise Security Content Update repo (https://github.com/splunk/security_content/blob/develop/baselines/create_a_list_of_approved_aws_service_accounts.yml), it looks like the following can be used to create the aws_service_accounts lookup:

`cloudtrail` errorCode=success
| rename userName as identity
| search NOT [inputlookup identity_lookup_expanded | fields identity]
| stats count by identity
| table identity
| outputlookup aws_service_accounts
| stats count

You must install the AWS App for Splunk (version 5.1.0 or later) and the Splunk Add-on for AWS (version 4.4.0 or later), then configure your CloudTrail inputs. Please validate the service account entries in aws_service_accounts.csv, which is a lookup file created as a result of running this support search, and remove the entries for service accounts that are not legitimate.

Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
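To review what the baseline wrote before pruning it, a quick sketch:

| inputlookup aws_service_accounts
| table identity

Then filter out the rows for accounts that are not legitimate (e.g. | search NOT identity IN (...)) and write the result back with | outputlookup aws_service_accounts.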
I checked that but can't find anything concrete, including on the server where Splunk DB Connect is installed (e.g. dbx.log etc.).
@livehybrid: the reason for asking the 2nd and 3rd questions: for the 2nd, I keep hearing that there need to be 3 SHs in a site in a cluster, otherwise captain selection will be difficult; however, I can also see this architecture (https://www.splunk.com/en_us/pdfs/white-paper/splunk-validated-architectures.pdf), so is my config correct in that case? For the 3rd question, I have mentioned the site as site0, indicating that site affinity is disabled. Is that not correct, or should I still mention which site it is?
Hi @Kenny_splunk

Unfortunately this is not something that is possible. I have seen some attempts at this previously; however, it is very easy to miss things, as specific fields are not always referenced explicitly but could still be used, as in the following examples:

- A _raw event could be presented in a dashboard, and a viewer may use it to determine something.
- A raw event may be emailed as an alert to a user, who takes action based on something inside the event.
- Use of wildcards, such as | table my_* or stats values(*) as *
Hello Experts, In Splunk ITSI, we're able to see the alerts in the Alerts table, but those alerts are not reflected on the Glass Tables. Has anyone experienced this issue, or can anyone suggest what might be causing it? @Ann_Treesa  Regards,  Manideep Anchoori manideep.anchoori@erasmith.com
Hi @rahulhari88

The docs on "Configure multi-cluster search for multisite indexer clusters" are also worth a read to understand how this is configured. There are conf-file based examples as well as CLI examples, in case you plan to commit your changes as config files in a repo/deployment system. A sketch of the conf-file approach is below.
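For reference, a sketch of multi-cluster search configured on a search head — the stanza names follow the documented pattern, while the hostnames, site value, labels ("east"/"west") and secret are placeholders:

== server.conf (on the search head) ==
[general]
site = site1

[clustering]
mode = searchhead
manager_uri = clustermanager:east,clustermanager:west

[clustermanager:east]
manager_uri = https://cm-east.example.com:8089
pass4SymmKey = <secret>
multisite = true

[clustermanager:west]
manager_uri = https://cm-west.example.com:8089
pass4SymmKey = <secret>
multisite = true

A restart of the search head is needed after editing server.conf.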
Hi @blanky

I replied to your previous post about this yesterday here: https://community.splunk.com/t5/Getting-Data-In/change-timestamp-for-extra-data/m-p/744204#M118235

Were you able to test this approach, or is this not what you are looking for? Please could you include some sample data as examples of before/after, so we can see what you are looking to achieve if the suggested solution is not appropriate?

You could try something like this:

== props.conf ==
[yourSourcetype]
TRANSFORMS-overwriteTime = overwriteTime

== transforms.conf ==
[overwriteTime]
INGEST_EVAL = _time=coalesce(strptime(substr(_raw,1,25),"%Y-%m-%d %H:%M:%S"),_time)

This tries to extract the time, using the format provided, from the first 25 characters of the _raw event (adjust accordingly); if that fails, it falls back on the _time previously determined. This allows you to overwrite the _time extraction for your other data. You can develop this further depending on the various events coming in, if necessary.

For more context on this, check out Richard Morgan's fantastic props/transforms examples at https://github.com/silkyrich/ingest_eval_examples/blob/master/default/transforms.conf#L9

For time format variables see https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Commontimeformatvariables
Hi @rahulhari88

To have 1 copy of each search artifact on every search head in your cluster, set -replication_factor equal to the total number of search heads in the cluster. For example, if you have 3 search heads, use -replication_factor 3.

Example:

splunk init shcluster-config -auth <username>:<password> -mgmt_uri <yourHost>:8089 -replication_port <port> -replication_factor 3 -conf_deploy_fetch_url <url>:<port> -secret <secret> -shcluster_label <yourLabel>

To bootstrap the captain, run the splunk bootstrap shcluster-captain command on any one search head (e.g., SH1). The node where you run this command will initially become the captain, but captaincy can change automatically later, as this is dynamic (unless otherwise set as static).

Example:

/opt/splunk/bin/splunk bootstrap shcluster-captain -servers_list "https://splunk-essh01.abc.local:8089,https://splunk-essh02.abc.local:8089,https://splunk-essh03.abc.local:8089"

To connect your search head cluster to the indexer cluster, your command is almost correct, but you need to specify the site number.

Example:

splunk edit cluster-config -mode searchhead -site site<n> -manager_uri https://yourCMAddress:8089 -replication_port 9887 -secret ""

The replication_factor determines how many copies of each search artifact exist in the cluster; setting it to the number of search heads ensures every SH has a copy. Bootstrapping the captain on any member makes it the initial captain; captaincy may change later due to elections. The edit cluster-config command with -mode searchhead is the correct way to connect your SHC to an indexer cluster. The conf-file equivalent of the init command is sketched below.

The following docs might also be helpful:

https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/SHCdeploymentoverview
https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Multisitearchitecture
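For completeness, a sketch of what the init command amounts to in conf-file form — the [shclustering] attribute names are standard, while the URLs, port, secret and label are placeholders:

== server.conf (on each SHC member) ==
[shclustering]
replication_factor = 3
mgmt_uri = https://<thisHost>:8089
conf_deploy_fetch_url = https://<deployer>:8089
pass4SymmKey = <secret>
shcluster_label = <yourLabel>

[replication_port://<port>]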
@rahulhari88  If this information meets your requirements, please proceed to accept the solution
Hi @sabollam

You can use the following to update this within the _raw event at search time:

| eval _raw=json_set(_raw, "timestamp", strftime(json_extract(_raw,"timestamp")/1000,"%m-%d-%Y %H:%M:%S.%3N"))

However, if you want to do this at index time, then you need the following:

== props.conf ==
[yourSourcetype]
TRANSFORMS-overrideTimeStamp = overrideTimeStamp

== transforms.conf ==
[overrideTimeStamp]
INGEST_EVAL = _raw=json_set(_raw, "timestamp", strftime(json_extract(_raw,"timestamp")/1000,"%m-%d-%Y %H:%M:%S.%3N"))
@rahulhari88

You configure the site replication factor with site_replication_factor:

site_replication_factor = origin:<n>, [site1:<n>,] [site2:<n>,] ..., total:<n>

where:

- <n> is a positive integer indicating the number of copies of a bucket.
- origin:<n> specifies the minimum number of copies of a bucket that will be held on the site originating the data in that bucket (that is, the site where the data first entered the cluster). When a site is originating the data, it is known as the "origin" site.
- site1:<n>, site2:<n>, ..., indicates the minimum number of copies that will be held at each specified site. The identifiers "site1", "site2", and so on, are the same as the site attribute values specified on the peer nodes.
- total:<n> specifies the total number of copies of each bucket, across all sites in the cluster.

You configure the site search factor with site_search_factor:

site_search_factor = origin:<n>, [site1:<n>,] [site2:<n>,] ..., total:<n>

where:

- <n> is a positive integer indicating the number of searchable copies of a bucket.
- origin:<n> specifies the minimum number of searchable copies of a bucket that will be held on the origin site.
- site1:<n>, site2:<n>, ..., indicates the minimum number of searchable copies that will be held at each specified site.
- total:<n> specifies the total number of searchable copies of each bucket, across all sites in the cluster.
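Putting the two factors together, a manager-node server.conf for a two-site cluster might look like this sketch (the factor values are illustrative, not a recommendation):

== server.conf (on the manager node) ==
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2
pass4SymmKey = <secret>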
Hi @berrybob

Just to confirm - you set CRYPTOGRAPHY_ALLOW_OPENSSL_102=1 in your /opt/splunk/etc/splunk-launch.conf? Did you then restart Splunk? Once this is done, it should allow OpenSSL 1.0.2.

After this, do you still get the exact same error, or is it different?
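For reference, the setting is a plain environment line in splunk-launch.conf (the file path is as given in the thread):

== /opt/splunk/etc/splunk-launch.conf ==
CRYPTOGRAPHY_ALLOW_OPENSSL_102=1

followed by /opt/splunk/bin/splunk restart for it to take effect.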
@rahulhari88

Configuring the Multisite Manager Node:

splunk edit cluster-config -mode manager -multisite true -site site1 -available_sites site1,site2 -site_replication_factor origin:1,total:2 -site_search_factor origin:1,total:2 -secret mycluster

Configuring Multisite Cluster Peer Nodes:

Peers 1 & 2:
splunk edit cluster-config -master_uri https://x.x.x.x:8089 -mode peer -site site1 -replication_port 9100 -secret mycluster

Peers 3 & 4:
splunk edit cluster-config -master_uri https://x.x.x.x:8089 -mode peer -site site2 -replication_port 9100 -secret mycluster

Configuring a New Multisite Search Head:

./splunk edit cluster-config -mode searchhead -master_uri https://x.x.x.x:8089 -site site2 -secret mycluster

Assign one of the members as the captain and set a member list:

./splunk bootstrap shcluster-captain -servers_list https://SH2:8089,https://SH3:8089,https://SH4:8089