All Topics


Hi everyone,

I've recently tested the new Splunk AI feature within Splunk ITSI that defines thresholds based on historical KPI data points. ("Tested" as in I literally created very obvious dummy data for the AI to process and find thresholds for; a sort of trust test of whether the AI really does find usable thresholds.)

Example: every 5 minutes the KPI takes the latest value, which I've set to correspond to the current weekday (plus minimal variance). For example, all KPI values on Mondays are within the range 100-110, Tuesdays 200-210, Wednesdays 300-310, and so forth. This is a preview of the data:

Now, after a successful backfill of 30 days, I would have expected the AI to see that each weekday needs its own time policy and thresholds. However, the result was this: no weekdays detected, and instead it finds time policies for every 4 hours, regardless of day.

By now I've tried all possible adjustments I could think of (increasing the number of data points, greater differences between data points, other algorithms, waiting for the next day in hopes it would recalibrate itself over midnight, etc.). Hardly any improvements at all, and the thresholds are not usable like this, as outliers on Mondays would go undetected (expected values 100-110; an outlier of 400 would not be flagged because it still falls within the thresholds).

Thus my questions to the community: Does anyone have ideas or suggestions for how I could make the AI understand the simple idea of "weekly time policies", and how I could tweak it (aside from doing everything manually and ditching the AI idea as a whole)? Does anyone have good experience with Splunk AI defining thresholds, and if so, what were the use cases?

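(For reference, dummy data of the kind described above can be generated with an SPL search along these lines; the field name kpi_value and the exact window are illustrative, not from the original setup.)

    | makeresults count=8640
    | streamstats count AS i
    | eval _time = now() - (i * 300)
    | eval dow = tonumber(strftime(_time, "%w"))
    | eval day_index = if(dow = 0, 7, dow)
    | eval kpi_value = (day_index * 100) + (random() % 11)
    | table _time kpi_value

(8640 points at one every 300 seconds covers 30 days; Monday maps to 100-110, Tuesday to 200-210, and Sunday to 700-710.)
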
Description: Hello, I am experiencing an issue with the "event_id" field when transferring notable events from Splunk Enterprise Security (ES) to Splunk SOAR.

Details: When sending the event to SOAR using an Adaptive Response Action (Send to SOAR), the event is sent successfully, but the "event_id" field does not appear in the data received in SOAR.

Any assistance or guidance to resolve this issue would be greatly appreciated. Thank you

Hi,

We are running a Splunk Enterprise HWF with a generic S3 input to fetch objects from an S3 bucket. However, each time we try to move this input onto a new, identical HWF, we have issues getting the same data from the same bucket. Both instances are on Splunk 9.2, though the Splunk AWS TA versions are different. Both are pipeline managed, so they have all the same config/certs. The only difference we can see is that in the AWS TA input log, the 'broken' input never creates the S3 connection before fetching the S3 objects, and seems to think the bucket is empty.

Working input:

2025-01-15 10:25:09,124 level=INFO pid=5806 tid=Thread-6747 logger=splunk_ta_aws.common.aws_credentials pos=aws_credentials.py:load:162 | bucket_name="bucketname" datainput="input", start_time=1736918987 job_uid="8888", phase="fetch_key" | message="load credentials succeed" arn="AWSARN" expiration="2025-01-15 11:25:09+00:00"
2025-01-15 10:25:09,125 level=INFO pid=5806 tid=Thread-6747 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_get_bucket:364 | bucket_name="bucketname" datainput="input", start_time=1736918987 job_uid="8888", phase="fetch_key" | message="Create new S3 connection."
2025-01-15 10:25:09,130 level=INFO pid=5806 tid=Thread-6841 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=s3_key_processer.py:_do_index:148 | bucket_name="bucketname" datainput="input" last_modified="2025-01-15T04:00:41.000Z" phase="fetch_key" job_uid="8888" start_time=1736918987 key_name="bucketobject" | message="Indexed S3 files." size=819200 action="index"

Broken input:

2025-01-15 12:00:33,369 level=INFO pid=3157753 tid=Thread-4 logger=splunk_ta_aws.common.aws_credentials pos=aws_credentials.py:load:217 | datainput="input" bucket_name="bucketname", start_time=1736942432 job_uid="8888", phase="fetch_key" | message="load credentials succeed" arn="AWSARN" expiration="2025-01-15 13:00:33+00:00"
2025-01-15 12:00:33,373 level=INFO pid=3157753 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_fetch_keys:378 | datainput="input" bucket_name="bucketname", start_time=1736942432 job_uid="88888", phase="fetch_key" | message="End of fetching S3 objects." pending_key_total=0

Unsure where to go from here, as we have tried this on multiple new machines.

Thanks,
Meaf

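(One thing that may be worth comparing on both forwarders, offered as a hint rather than a diagnosis: the TA's checkpoint state, since a checkpoint that is already ahead of the bucket's last-modified times would make the input report pending_key_total=0. The path below is where the generic S3 input typically stores checkpoints; verify it for your TA version.)

    # on each HWF, list the generic S3 input checkpoints (path may vary by TA version)
    ls -l $SPLUNK_HOME/var/lib/splunk/modinputs/aws_s3/
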
Hello,

We have JSON data coming into Splunk, and to extract it we have been using:

| rex "(?<json>\{.*\})" | spath input=json

Now, my ask is that I want this extraction to happen by default for one or more sourcetypes, without adding it to the search query every time. Does this need to be done during onboarding itself? If yes, please help me with a step-by-step procedure. We don't have a HF. We have a deployment server, a manager, and 3 indexers. The DS pushes apps to the manager, and from there the manager pushes apps to the peers.

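(A minimal sketch of what the search-time configuration could look like, assuming the sourcetype is called my_sourcetype; the name is hypothetical, and the app would be deployed wherever searches run. Note that KV_MODE = json only auto-extracts when the entire event is valid JSON; if the JSON is embedded mid-event, the EXTRACT below isolates the blob into a json field, but applying spath to it would still be needed, for example wrapped in a macro.)

    # props.conf on the search-time instance (sourcetype name is hypothetical)
    [my_sourcetype]
    # pull the embedded JSON blob into a field named "json"
    EXTRACT-json = (?<json>\{.*\})
    # auto-extract fields when the whole event is valid JSON
    KV_MODE = json
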
Hello Team,

We want to monitor our AWS OpenSearch resources with AppDynamics, and we configured the AWS OpenSearch CloudWatch extension, but unfortunately it is throwing the error below:

"ERROR AmazonElasticsearchMonitor - Unfortunately an issue has occurred: java.lang.NullPointerException: null
at com.appdynamics.extensions.aws.elasticsearch.AmazonElasticsearchMonitor.createMetricsProcessor(AmazonElasticsearchMonitor.java:77) ~[?:?]
at com.appdynamics.extensions.aws.elasticsearch.AmazonElasticsearchMonitor.getNamespaceMetricsCollector(AmazonElasticsearchMonitor.java:45) ~[?:?]
at com.appdynamics.extensions.aws.elasticsearch.AmazonElasticsearchMonitor.getNamespaceMetricsCollector(AmazonElasticsearchMonitor.java:36) ~[?:?]
at com.appdynamics.extensions.aws.SingleNamespaceCloudwatchMonitor.getStatsForUpload(SingleNamespaceCloudwatchMonitor.java:31) ~[?:?]
at com.appdynamics.extensions.aws.AWSCloudwatchMonitor.doRun(AWSCloudwatchMonitor.java:102) [?:?]
at com.appdynamics.extensions.AMonitorJob.run(AMonitorJob.java:50) [?:?]
at com.appdynamics.extensions.ABaseMonitor.executeMonitor(ABaseMonitor.java:199) [?:?]
at com.appdynamics.extensions.ABaseMonitor.execute(ABaseMonitor.java:187) [?:?]
at com.singularity.ee.agent.systemagent.components.monitormanager.managed.MonitorTaskRunner.runTask(MonitorTaskRunner.java:149) [machineagent.jar:Machine Agent v24.9.1.4416 GA compatible with 4.4.1.0 Build Date 2024-10-03 14:53:45]
at com.singularity.ee.agent.systemagent.components.monitormanager.managed.PeriodicTaskRunner.runTask(PeriodicTaskRunner.java:86) [machineagent.jar:Machine Agent v24.9.1.4416 GA compatible with 4.4.1.0 Build Date 2024-10-03 14:53:45]
at com.singularity.ee.agent.systemagent.components.monitormanager.managed.PeriodicTaskRunner.run(PeriodicTaskRunner.java:47) [machineagent.jar:Machine Agent v24.9.1.4416 GA compatible with 4.4.1.0 Build Date 2024-10-03 14:53:45]
at com.singularity.ee.util.javaspecific.scheduler.AgentScheduledExecutorServiceImpl$SafeRunnable.run(AgentScheduledExecutorServiceImpl.java:122) [agent-24.10.0-891.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask$Sync.innerRunAndReset(ADFutureTask.java:335) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask.runAndReset(ADFutureTask.java:152) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.access$101(ADScheduledThreadPoolExecutor.java:128) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.runPeriodic(ADScheduledThreadPoolExecutor.java:215) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.run(ADScheduledThreadPoolExecutor.java:253) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.runTask(ADThreadPoolExecutor.java:694) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.run(ADThreadPoolExecutor.java:726) [agent-24.10.0-891.jar:?]
at java.lang.Thread.run(Thread.java:829) [?:?]"

Can someone help here? We have used the GitHub code base below for this: https://github.com/Appdynamics/aws-elasticsearch-monitoring-extension

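(Not from the original post, but since createMetricsProcessor is where the NPE surfaces, one common first check with these CloudWatch-based extensions is whether config.yml has a complete metricsConfig section. The fragment below is purely illustrative, and the key names should be verified against the extension's README.)

    # config.yml fragment (illustrative; verify key names against the extension's README)
    accounts:
      - awsAccessKey: "..."
        awsSecretKey: "..."
        displayAccountName: "MyAccount"
        regions: ["us-east-1"]
    metricsConfig:
      includeMetrics:
        - name: "CPUUtilization"
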
Hi all,

A general question that I couldn't find an answer to: if I move a certain app from one server class to another and restart splunkd, will there be any effect on indexing? I mean, will it re-index the same data, or a portion of it, twice? Or, since it's the same app and the same source, maybe there is no need to restart splunkd?

Thanks

Hi all,

I'm in the process of migrating our single-host Splunk installation to a new server. After setting up a new Splunk instance and feeding it data from a few devices, I noticed an oddity I never noticed before. Logging in and getting to Search & Reporting all works at the expected speed. But every time I start a new search, 18 to 19 seconds are spent on a POST call to the URL (host and user obfuscated):

https://hostname/en-US/splunkd/__raw/servicesNS/myusername/search/search/ast

The result is always a 200, but it always takes those 18 to 19 seconds to finish. Once I have the results, everything is fast: selections in the timeline, paging through results, and changing the "results per page" value. It seems like the system is trying something, runs into a timeout, and then proceeds with normal work, but I cannot figure out what that would be. I have not done much customization yet, but we are in a heavily firewalled environment. Am I overlooking something here?

What is the fastest way to migrate Splunk objects (dashboards, alerts, reports) from one of these old versions (6.5, 7) to the latest Splunk Cloud? Thanks.

Hello everyone!

I am experimenting with the SC4S transforms that are posted here: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/

My goal is to send the logs to a syslog-ng instance running with a custom config. My current problem is that the SC4S config contains a part that checks for subseconds and appends the value to the timestamp, if found.

[metadata_source]
SOURCE_KEY = MetaData:Source
REGEX = ^source::(.*)$
FORMAT = _s=$1 $0
DEST_KEY = _raw

[metadata_sourcetype]
SOURCE_KEY = MetaData:Sourcetype
REGEX = ^sourcetype::(.*)$
FORMAT = _st=$1 $0
DEST_KEY = _raw

[metadata_index]
SOURCE_KEY = _MetaData:Index
REGEX = (.*)
FORMAT = _idx=$1 $0
DEST_KEY = _raw

[metadata_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.*)$
FORMAT = _h=$1 $0
DEST_KEY = _raw

[metadata_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1$0
DEST_KEY = _raw

[metadata_subsecond]
SOURCE_KEY = _meta
REGEX = \_subsecond\:\:(\.\d+)
FORMAT = $1 $0
DEST_KEY = _raw

In my case, however, when no subsecond is found, the timestamp field does not get a whitespace appended, and thus it is effectively concatenated with the following field, which is not what I want. How could I set up the config so that there will always be a whitespace before the next field (the host/_h field)? I tried adding an extra whitespace in front of the _h in the FORMAT part of the metadata_host stanza, but that seems to be ignored. This is what I see:

05:58:07.270973 lo In ifindex 1 00:00:00:00:00:00 ethertype IPv4 (0x0800), length 16712: (tos 0x0, ttl 64, id 49071, offset 0, flags [DF], proto TCP (6), length 16692) 127.0.0.1.49916 > 127.0.0.1.cslistener: Flags [.], cksum 0x3f29 (incorrect -> 0x5743), seq 1:16641, ack 1, win 260, options [nop,nop,TS val 804630966 ecr 804630966], length 16640
0x0000: 0800 0000 0000 0001 0304 0006 0000 0000 ................
0x0010: 0000 0000 4500 4134 bfaf 4000 4006 3c12 ....E.A4..@.@.<.
0x0020: 7f00 0001 7f00 0001 c2fc 2328 021a 7392 ..........#(..s.
0x0030: 486d 209f 8010 0104 3f29 0000 0101 080a Hm......?)......
0x0040: 2ff5 b1b6 2ff5 b1b6 5f74 733d 3137 3336 /.../..._ts=1736
0x0050: 3931 3730 3739 5f68 3d73 706c 756e 6b2d 917079_h=splunk-
0x0060: 6866 205f 6964 783d 5f6d 6574 7269 6373 hf._idx=_metrics
0x0070: 205f 7374 3d73 706c 756e 6b5f 696e 7472 ._st=splunk_intr

This is the interesting part:

0x0040: 2ff5 b1b6 2ff5 b1b6 5f74 733d 3137 3336 /.../..._ts=1736
0x0050: 3931 3730 3739 5f68 3d73 706c 756e 6b2d 917079_h=splunk-
0x0060: 6866 205f 6964 783d 5f6d 6574 7269 6373 hf._idx=_metrics

The _h comes right after the end of the _ts field, without any clear separation.

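(An untested sketch of one possible workaround, not from SC4S itself: since each of these transforms rewrites _raw, an extra transform listed after the others in props.conf could re-insert the missing space whenever _ts runs straight into _h. The stanza name is made up.)

    # transforms.conf (untested sketch; stanza name is made up)
    [metadata_time_pad]
    SOURCE_KEY = _raw
    # match a _ts value with no subsecond that runs straight into _h
    REGEX = ^(_ts=\d+)(_h=.*)$
    # rewrite _raw with a single space between the two fields
    FORMAT = $1 $2
    DEST_KEY = _raw

(When a subsecond is present, the digits are followed by a dot and a space rather than _h, so the regex should not match and the transform becomes a no-op.)
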
I set up a DoD banner at login by putting the following in web.conf. It was displayed in Splunk 9.2.1, but it is no longer displayed in Splunk 9.2.2.

$SPLUNK_HOME$\etc\system\local\web.conf

[settings]
login_content = <script>function DoDBanner() {alert("Hello World");}DoDBanner();</script>

Is DoDBanner no longer supported as of a certain version?

I'm trying to take a single-node Splunk Enterprise system and expand it into a cluster with an additional search head and indexers. I copied the existing install to a new system, and that worked perfectly. Then I added the cluster manager and indexers, and all of the settings from the old system that had been copied to the search head were gone. I'm assuming that I put the copy of the single node into the wrong role, but I'm not sure which role I should have picked.

I want to replicate search1's KV store to search2. The two search heads are standalone, not clustered.

I found the replication_host option in the [kvstore] stanza of server.conf. Does this option only operate in a clustered environment?

This is the error I get when trying to start SOAR on the warm standby:

Splunk SOAR is in standby. Starting all services except automation daemons
Starting Database server (PostgreSQL): [ OK ]
Starting Connection pooler (PgBouncer): [ OK ]
Checking database connectivity: [ OK ]
Checking component versions: [ OK ]
Starting Supervisord: [ OK ]
Starting Splunk SOAR daemons: [ OK ]
Checking Supervisord processes: [FAILED]
add_to_es_index failed to start. Check /opt/soar/var/log/phantom/add-es-index-stderr.log for more info.
Splunk SOAR startup failed.

I saw there is a known issue for this: https://docs.splunk.com/Documentation/SOARonprem/6.3.1/ReleaseNotes/KnownIssues

2024-12-03 PSAAS-20901 supervisord failing to start on warm standby instance

Does anyone have a workaround or fix for this issue?

Hi all,

I would like to migrate our current cluster master to a new server. Here's what I gather the process to be. If someone can take a look and let me know if there's anything missing, that'll be much appreciated. Thank you! Additionally, should I enable cluster maintenance mode on the old cluster master prior to the migration? (See the maintenance-mode sketch after this post.)

========================================================================
======================== Migrate the Cluster Master ====================
========================================================================

- Stop the splunk service on both the old and new cluster master:
/opt/splunk/bin/splunk stop

- On the old Cluster Master, change encrypted passwords to clear text and save these:
find /opt/splunk/etc -name '*.conf' -exec grep -inH '\$[0-9]\$' {} \;
/opt/splunk/bin/splunk show-decrypted --value '$encryptedpassword'

- Copy files to the new Cluster Master:
scp -r /opt/splunk/var/run/splunk/cluster/remote-bundle/ new_splunkmaster:/opt/splunk/var/run/splunk/cluster/remote-bundle/
scp -r /opt/splunk/etc/master-apps/ new_splunkmaster:/opt/splunk/etc/
scp -r /opt/splunk/etc/system/local/server.conf new_splunkmaster:/opt/splunk/etc/system/local/

- Make sure the two main passwords below were decrypted above, and place them in the copied server.conf on the new Cluster Master in clear text; they will be re-encrypted when it restarts:
[general]
sslPassword=
[clustering]
pass4SymmKey=

- Start splunk on the new Cluster Master:
/opt/splunk/bin/splunk start

- Point the indexers to the new Cluster Master:
/opt/splunk/bin/splunk edit cluster-config -mode peer -manager_uri https://new_splunkmaster:8089 -replication_port 9887 -secret new_splunkmaster

- Point the search heads to the new Cluster Master:
/opt/splunk/bin/splunk edit cluster-config -mode searchhead -manager_uri https://new_splunkmaster:8089 -secret new_splunkmaster

========================================================================
======================== Migrate the License Manager ===================
========================================================================

- Promote a license peer to be the manager:
On the peer, navigate to Settings > Licensing.
Click Switch to local manager.
On the Change manager association page, choose Designate this Splunk instance as the manager license server.
Click Save.
Restart the Splunk Enterprise services.
On the new license manager, install your licenses. See Install a license.

- Configure the license peers to use the new license manager:
On each peer (indexers / search heads / deployer), navigate to Settings > Licensing.
Click Switch to local manager.
Update the Manager license server URI to point at the new license manager.
Click Save.
Restart the Splunk Enterprise services.

- Demote the old license manager to be a peer:
On the old license manager, navigate to Settings > Licensing.
Click Change to peer.
Click Designate a different Splunk instance as the manager license server.
Update the Manager license server URI to point at the new license manager.
Click Save.
Stop the Splunk Enterprise services.
Using the CLI, delete any license files under $SPLUNK_HOME/etc/licenses/enterprise/.
Start the Splunk Enterprise services.

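(On the maintenance-mode question: enabling it on the old cluster manager before re-pointing the peers is generally sensible, since it suppresses bucket fix-up activity while the peers change managers. A sketch, assuming default paths:)

    # on the old cluster manager, before re-pointing the peers
    /opt/splunk/bin/splunk enable maintenance-mode
    # ... perform the migration ...
    # on the new cluster manager, once all peers have rejoined
    /opt/splunk/bin/splunk disable maintenance-mode
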
I have a timechart that shows a calculated value split by hostname, e.g.:

[[search]]
| eval overhead=(totaltime - routingtime)
| timechart span=1s eval(round(avg(overhead),1)) by hostname

What I am trying to do is also show the calculated overhead value not split by hostname:

[[search]]
| eval overhead=(totaltime - routingtime)
| timechart span=1s eval(round(avg(overhead),1))

How do I show the split-out overhead values and the combined overhead value in the same timechart?

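(One possible pattern, sketched under the assumption that [[search]] stands for the same base search in both places: append an overall column from a subsearch that runs the identical calculation without the split. Both timecharts must use the same span so the rows line up; the AS aliases just give the columns distinct names.)

    [[search]]
    | eval overhead=(totaltime - routingtime)
    | timechart span=1s eval(round(avg(overhead),1)) AS overhead by hostname
    | appendcols
        [ search [[search]]
        | eval overhead=(totaltime - routingtime)
        | timechart span=1s eval(round(avg(overhead),1)) AS overall_overhead ]
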
This is an example of the structure of my data and the query I am currently using. I have tried around 10 different solutions based on various examples from stackoverflow.com and community.splunk.com, but I have not figured out how to change this query so that eval Tag = "Tag1" can become a list, e.g. eval Tags = ["Tag1", "Tag4"], and I get entries for all tags that exist in that list. Could someone guide me in the right direction?

| makeresults
| eval _raw = "{ \"Info\": { \"Apps\": { \"ReportingServices\": { \"ReportTags\": [ \"Tag1\" ], \"UserTags\": [ \"Tag2\", \"Tag3\" ] }, \"MessageQueue\": { \"ReportTags\": [ \"Tag1\", \"Tag4\" ], \"UserTags\": [ \"Tag3\", \"Tag4\", \"Tag5\" ] }, \"Frontend\": { \"ClientTags\": [ \"Tag12\", \"Tag47\" ] } } } }"
| eval Tag = "Tag1"
| spath
| foreach *ReportTags{} [| eval tags=mvappend(tags, if(lower('<<FIELD>>') = lower(Tag), "<<FIELD>>", null()))]
| dedup tags
| stats values(tags)

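(A sketch of one possible direction, with the caveat that it is untested: SPL has no array literal, but split() produces a multivalue field, and mvfind() can then test each field value against that list. The anchored, lowercased regex is an assumption about how matching should behave.)

    | makeresults
    | eval _raw = "<same JSON test data as above>"
    | eval Tags = split(lower("Tag1,Tag4"), ",")
    | spath
    | foreach *ReportTags{} [| eval tags=mvappend(tags, if(isnotnull(mvfind(Tags, "^".lower('<<FIELD>>')."$")), "<<FIELD>>", null()))]
    | dedup tags
    | stats values(tags)
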
Hi team,

Is there a way to connect the Splunk Cloud Platform with Splunk on-prem, in order to send a specific index to Splunk on-prem? The client does not allow modifications to the universal forwarder agents.

Regards

Hello AppDynamics Community,

Now that AppDynamics is part of the Splunk Observability portfolio, we're consolidating marketing websites and moving content from www.appdynamics.com to www.splunk.com.

As of February 1, 2025, if you visit a URL on www.appdynamics.com, you'll be automatically redirected to the corresponding page on splunk.com. Please note that this change does not affect Docs (docs.appdynamics.com), community (community.appdynamics.com), product (accounts.appdynamics.com), or the support portal.

Here are direct links to some AppDynamics pages now on splunk.com:
AppDynamics support overview (formerly appdynamics.com/support) has links to support, AppDynamics login, documentation, and more.
AppDynamics joins Splunk
AppDynamics product page
AppDynamics customer stories

Hello,

Can someone please provide an eksctl command line, or a command line in combination with a cluster config file, that will provision an EKS cluster (control plane and worker nodes) resourced for installation of the splunk-operator and then experimentation with standalone Splunk Enterprise configurations?

Thanks,
Mark

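(Not an official sizing, but something like the following could be a starting point; the cluster name, region, Kubernetes version, instance type, and node counts are all assumptions to adjust. Once the cluster is up, the splunk-operator manifests from the project's releases page can be applied with kubectl.)

    # illustrative only: name, region, version, instance type, and counts are assumptions
    eksctl create cluster \
      --name splunk-operator-sandbox \
      --region us-east-1 \
      --version 1.29 \
      --nodegroup-name workers \
      --node-type m5.2xlarge \
      --nodes 3 \
      --nodes-min 3 \
      --nodes-max 5
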
We see the following on the server via ss -tulpn:

tcp LISTEN 0 128 0.0.0.0:8089 0.0.0.0:* users:(("splunkd",pid=392724,fd=4))

However, the browser at http://<Indexer>:8089 returns ERR_CONNECTION_RESET, while http://<Indexer>:8000 works as expected. What could it be?

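(One likely explanation, offered as a hint rather than a confirmed diagnosis: 8089 is splunkd's management port, which speaks HTTPS by default, usually with a self-signed certificate, so a plain http:// request tends to be reset. A quick check from a shell:)

    # the management port answers TLS, not plain HTTP; -k skips certificate verification
    curl -k https://<Indexer>:8089/services/server/info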