All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I am using the payload below in the HTTP request template for our ServiceNow integration:

    {
      "caller_id": "AppDynamics API",
      "category": "Application Support",
      "subcategory": "Application Generated Alert",
      "cmdb_ci": "linvm2332",
      "contact_type": "Integration",
      "assignment_group": "L2 APP SUPPORT",
      "short_description": "AppDynamics test - phase 2",
      "description": "AppDynamics test Description  ${policy.name}  ${latestEvent.application.name} ${topSeverity.NotificationSeverity}",
      "severity": "2",
      "impact": "2",
      "u_program_job_name": "App Details",
      "u_reported_by": "AppDynamics API"
    }

But the output comes through as below, with some placeholders left unresolved instead of their values. Can you please help me use the right variables to fetch the actual AppDynamics details?

    "AppDynamics test Description  Sample Policy Name  ${latestEvent.application.name} ${topSeverity.NotificationSeverity}",
We have a few users scheduling searches with an "all time" time range. How can I track down those knowledge objects and delete or pause them?
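One way to find them (a hedged sketch: the REST endpoint and saved-search attributes below are standard, but how "all time" shows up, as an empty or zero dispatch.earliest_time, can vary, so adjust the filter to what you see in your environment):

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search is_scheduled=1
    | fillnull value="unset" dispatch.earliest_time
    | search dispatch.earliest_time="unset" OR dispatch.earliest_time=0
    | table title eai:acl.app eai:acl.owner cron_schedule dispatch.earliest_time dispatch.latest_time

From that list you can follow up with the owners, or disable or edit the offending schedules under Settings > Searches, reports, and alerts.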
Hi, we have Splunk Cloud (Victoria experience) for our Splunk environment and AWS for our Linux environment. We have deployed Splunk using Splunk Cloud and would like to ingest the Inspector logs into Splunk. If anyone can share some tips, it would be appreciated. Thanks, Yogesh Raj Swaitchfly
I am unable to connect to the Splunk server for my 60-day trial. I was able to connect yesterday, but today I am unable to connect and am getting the following message: "Hmmm... can't reach this page. localhost refused to connect" when I enter http://localhost:8000/en-US/account/login?return_to=%2Fen-US%2F. I called Norton and they stated that the firewall is not denying access and that the problem is on the Splunk server end. Can anyone offer any suggestions?
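Since this is a local trial install, the usual first check (a hedged suggestion; the path assumes a default installation directory) is whether splunkd is still running, because a stopped service gives exactly this "refused to connect" symptom after a reboot:

    # run on the machine where Splunk Enterprise is installed
    $SPLUNK_HOME/bin/splunk status
    # if it reports splunkd is not running, start it and retry http://localhost:8000
    $SPLUNK_HOME/bin/splunk start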
Hey all, I've got a multisearch query using inputlookups to untangle a sprawling Kafka setup, getting the various latencies from source to destination, evaluating them, and grouping the results per application for an overall time, e.g. AppA avg latency is 1.09 sec, AppX avg latency is 0.9 sec.

The simplified output of the main query looks like this for a 3-hour window (there are 8 or so other columns with the times that get summed):

    Application  _time            total_avg  total_max
    AppA         28/6/2023 0:00   0.05       0.09
    AppA         28/6/2023 1:00   0.05       0.1
    AppA         28/6/2023 2:00   0.05       0.08
    AppB         28/6/2023 0:00   0.05       0.09
    AppB         28/6/2023 1:00   0.22       2.72
    AppB         28/6/2023 2:00   0.05       0.09
    AppC         28/6/2023 0:00   0.06       0.1
    AppC         28/6/2023 1:00   0.05       0.09
    AppC         28/6/2023 2:00   0.05       0.09
    AppX         28/6/2023 0:00   0.05       0.09
    AppX         28/6/2023 1:00   0.04       0.09
    AppX         28/6/2023 2:00   0.04       0.09

I'm trying to extend this query against another lookup holding the SLA threshold for each app, and from the output above calculate an SLA % for a dashboard that tracks the SLA across all Applications. Pretty basic: was the app's latency below its specific SLA threshold and thus "OK", or was it over and in "BREACH", per span (hourly by default), and what is the resulting SLA % per day/week/month for each app. I am using the following query on the output above:

    | makecontinuous _time span=60m
    | filldown Application
    | fillnull value="-1"
    | lookup SLA.csv Application AS Application OUTPUT SLA_threshold
    | eval spans = if(isnull(spans),1,spans)
    | fields _time Application spans SLA_threshold total_avg total_max
    | eval SLA_status = if(total_avg > SLA_threshold, "BREACH", "OK")
    | eval SLA_nodata = if(total_avg < 0, "NODATA", "OK")
    | eval BREACH = case(SLA_status == "BREACH", 1)
    | eval OK = case(SLA_status == "OK", 1)
    | eval NODATA = case(SLA_nodata == "NODATA", 1)
    | stats sum(spans) as TotalSpans, count(OK) as OK, count(BREACH) as BREACH, count(NODATA) as NODATA, by Application
    | eval SLA=OK/(TotalSpans)*100

This is mostly working okay, returning results for a dashboard like:

    Application  TotalSpans  SLA_threshold  OK  BREACH  NODATA  SLA %
    AppA         24          1.5            24  0       1       100
    …
    AppX         24          1              23  0       1       100

Unfortunately, there is a central problem I need to take into account: sometimes the apps don't have any data for their latency calculations, so they end up null, which means a missing bucket/span, and this throws off the SLA evaluation. For the sake of space, let's say that over a 3-hour period AppA is normal, with 3x 1-hour span buckets of latency results, so the SLA % eval works fine. But say for whatever reason AppX has missing results for bucket 01h, looking like this:

    Application  _time            total_avg  total_max
    AppA         28/6/2023 0:00   0.17       2.72
    AppA         28/6/2023 1:00   0.04       0.09
    AppA         28/6/2023 2:00   0.05       0.1
    AppX         28/6/2023 0:00   0.04       1.09
    AppX         28/6/2023 2:00   0.04       1.09

The SLA % eval will then be off for AppX, being calculated over one less span. Ideally, I need to fill in those empty buckets with something, not only to correctly count spans per app without affecting the SLA % calculation, but also to flag the missing-data spans somehow: being able to distinguish between an SLA breach for data above the threshold and a "NODATA" breach, so at least I have the option to choose how I treat those, or have a secondary SLA. My current approach has been to use makecontinuous _time span=60m and fillnull value="-1". The -1 results can then hit an eval for "NODATA" and be taken into account separately from buckets which "BREACH" their SLA latency threshold, e.g.:

    Application  _time            total_avg  total_max
    AppX         28/6/2023 0:00   0.04       1.09
    AppX         28/6/2023 1:00   -1         -1
    AppX         28/6/2023 2:00   0.04       1.09

Now, the eval/case logic for the different SLA and data conditions is not optimal or even right (a way to eval things as NODATA and class them as OK would be good). Either way, as mentioned, this approach works okay when a single specific app is being queried. But once I search Application="*", the approach with makecontinuous _time span=60m and the spans/case logic no longer works as desired, because the _time buckets already exist for the other Applications that have all their data, so makecontinuous doesn't add any missing buckets or fill in "-1" for the apps that are missing them. I've also tried timechart, which will fill things in for all apps, but then I'm faced with another problem: because Application is a non-numeric field, it adds a gazillion columns, e.g. "total_avg: AppA" ... "total_avg: AppX", and there are a dozen other numeric result columns. I'd prefer the output to stay one row per Application. Any suggestions for a tweak or an alternate way to make makecontinuous _time work on a per-Application basis, or a way to simplify or pivot off of the timechart output?
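One possible direction, offered as a hedged sketch rather than a drop-in fix (only total_avg and the SLA.csv lookup are taken from the question; everything else is illustrative, and only a single metric column is carried through): let timechart build the complete hourly time axis across all Applications, fill the empty cells, then untable back to one row per Application and span so the SLA logic can stay row-oriented:

    <base search producing Application, _time, total_avg>
    | timechart span=60m limit=0 useother=f avg(total_avg) BY Application
    | fillnull value=-1
    | untable _time Application total_avg
    | lookup SLA.csv Application OUTPUT SLA_threshold
    | eval SLA_status = case(total_avg < 0, "NODATA", total_avg > SLA_threshold, "BREACH", true(), "OK")
    | stats count AS TotalSpans, count(eval(SLA_status="OK")) AS OK, count(eval(SLA_status="BREACH")) AS BREACH, count(eval(SLA_status="NODATA")) AS NODATA BY Application
    | eval SLA = round((OK + NODATA) / TotalSpans * 100, 2)

The other numeric columns would need the same timechart-plus-untable treatment (joined back on Application and _time), which is the main cost of this approach. Counting NODATA spans as OK in the last line matches the "class NODATA as OK" idea from the question; drop the "+ NODATA" if a missing bucket should count against the SLA instead.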
Hello, is the Splunk UF/HF compatible with Ubuntu 20.x or later versions? Any recommendation would be highly appreciated. Thank you!
Is it possible in the Splunk Add-on for AWS to monitor multiple different directories in an S3 bucket and assign them different source types?

    s3bucket/something     > sourcetype 1
    s3bucket/somethingelse > sourcetype 2

I would think this is possible through the inputs, by specifying the bucket name as s3bucket/something and then setting the source type under the "more settings" options. Can anyone confirm this actually works?
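If it does work, I would expect it to look roughly like two generic S3 inputs that share a bucket but use different key prefixes and sourcetypes. A hedged inputs.conf sketch: the stanza names, account name, and prefixes are made up, and the parameter names are worth verifying against the add-on's inputs.conf spec or by creating one input in the UI and inspecting what it writes:

    [aws_s3://s3bucket_something]
    aws_account = my_aws_account
    bucket_name = s3bucket
    key_name = something/
    sourcetype = sourcetype1

    [aws_s3://s3bucket_somethingelse]
    aws_account = my_aws_account
    bucket_name = s3bucket
    key_name = somethingelse/
    sourcetype = sourcetype2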
Hello, I am trying to get a report of how many times in a month a certain process runs on a machine, and when it last ran.

    index=wss_desktop_perfmon sourcetype="wks:Perf_Process" instance!="_Total" instance!="idle"
    | where instance like "%bplus.wtk%"

This is the start of the search. In Splunk the process bplus.wtk can have multiple instances, like:

    bplus.wtk2#1
    bplus.wtk2#2
    bplus.wtk2#3

I do not care about anything past "bplus.wtk"; I just want to count how many times it shows up in a month on each machine. For each machine I want a report that looks like:

    Computer       bplus.wtk  Last Time it Ran
    workstation1   100        6/23/2023
    workstation2   250        6/27/2023

I have tried stats count and it is not working, I think because the value I am looking at is a string and not a number. Any help is appreciated.
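A hedged sketch of one way to do this. It assumes the machine name is in the host field (swap in whatever field your perfmon data actually uses for the computer name), and note that it counts matching events, one per perfmon sample, rather than distinct process launches:

    index=wss_desktop_perfmon sourcetype="wks:Perf_Process" instance!="_Total" instance!="idle" instance="*bplus.wtk*"
    | stats count AS "bplus.wtk", latest(_time) AS last_run BY host
    | eval "Last Time it Ran" = strftime(last_run, "%m/%d/%Y")
    | rename host AS Computer
    | table Computer "bplus.wtk" "Last Time it Ran"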
I am trying to configure the Splunk Add-on for AWS, but when creating an input, my AWS account doesn't show up. How can I fix this?
Hello, I have integrated against Splunk Enterprise features and would like to test the integration on the cloud platform as well, mainly for compatibility. I created a developer account and would like to request a cloud deployment, but at edition.dev.splunk.com I can only see the following text:

> Splunk® Cloud Developer Edition is not generally available right now. Splunk® Cloud Developer Edition is currently a limited-release product. Check back soon for updates.

What does this mean? Are there any other ways to request a cloud deployment?
I have the following setup in place and am sending events to Splunk Cloud from a K8s cluster. I am using a HF for data manipulation.

K8s cluster --> Heavy Forwarder --> Splunk Cloud

I receive all events sent by the K8s cluster, but not all fields from the events are getting extracted as JSON.

Current output:

    {
      action: modify
      containerid: 278e7bddd8b50ad885077
      count: 1
      host: example.com
      pid: 125
      time: 1456789023
      timestamp: 14567890234356
      metrics: {"metrics":{\"name1\":{\"m1\":\"downsample\",\"m2\":\"sum\"},\"name2\":{\"Headers\":{\"Selector\":{\"m1\":\"downsample\",\"m2\":\"sum\"}}}"}
      uid: 0
      user: 0
    }

I am looking to have all of the data in JSON key-value format, as shown in the expected output.

Expected output:

    {
      action: modify
      containerid: 278e7bddd8b50ad885077
      count: 1
      host: example.com
      pid: 125
      time: 1456789023
      timestamp: 14567890234356
      metrics: {
        metrics: {
          name1: {
            m1: downsample
            m2: sum
          }
          name2: {
            Headers: {
              Selector: {
                m1: downsample
                m2: sum
              }
            }
          }
        }
      }
      uid: 0
      user: 0
    }

How do I need to configure the Splunk heavy forwarder to extract this multivalued nested JSON?
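The metrics value in the current output is a JSON string with escaped quotes rather than a nested object, which is why the automatic JSON extraction stops at that field. A hedged sketch of two options, with a made-up sourcetype name (k8s:json) standing in for whatever sourcetype these events actually use: unescape the inner quotes at index time on the heavy forwarder via props.conf, or leave the event alone and pull the nested keys out at search time with spath.

    # props.conf on the heavy forwarder (index time, changes the stored event)
    [k8s:json]
    SEDCMD-unescape_metrics = s/\\"/"/g

    # search time, assuming the metrics field by then contains valid JSON
    ... | spath input=metrics

If the escaping stays in the indexed data, the search-time route needs a replace() on the metrics field first so spath can parse it; whether to fix this on the HF or upstream in the K8s logging pipeline is a judgment call.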
I need to create a search that determines if an admin user's password is changed. The current search pulls the Domain Admins group and checks for the Windows event codes that indicate a password change. However, it is telling us when an admin changes someone else's password, not only when an admin's own password is changed. How do I limit the search to admins, and only to cases where THEIR password is changed?
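In the Windows security events, TargetUserName is generally the account whose password was changed, while SubjectUserName is the account that performed the change, so the filter should go on TargetUserName. A hedged sketch, assuming a lookup of domain admin accounts called domain_admins.csv with a member column; adjust the index, sourcetype, and lookup to your environment (EventCode 4723 is a password change attempt, 4724 a password reset attempt):

    index=wineventlog sourcetype="WinEventLog:Security" (EventCode=4723 OR EventCode=4724)
        [ | inputlookup domain_admins.csv | rename member AS TargetUserName | fields TargetUserName ]
    | table _time host TargetUserName SubjectUserName EventCode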
I have a custom streaming command which takes a long time to start up (120 seconds) because it loads a cache. I usually stream about 100,000 events to this command in one invocation. This worked fine until we changed it to use Python 3. Now the same Splunk query, streaming about 100,000 events to this custom command, results in the command being called several times with chunks of 10,000 events, which obviously makes it very inefficient. It is a distributed environment with a single search head and a 100-node dual-site indexer cluster. The search head normally finds about 2,000,000 events, which are distributed across 20 indexers, resulting in roughly 100,000 events per call on each indexer. Any idea why these chunks are spread over several calls to the custom command?
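One thing worth checking, as a hedged suggestion (the stanza and file names below are made up): the commands.conf stanza for the command. For the legacy protocol, maxinputs (default 50000) caps how many events are passed per invocation, so it directly affects chunking, while the chunked (SCP v2) protocol keeps a single long-lived process per search, so the expensive cache load only has to happen once if it is done in the command's setup rather than per chunk:

    [mystreamingcmd]
    filename = mystreamingcmd.py
    python.version = python3
    chunked = true
    # legacy (non-chunked) protocol only: events passed per invocation
    # maxinputs = 50000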
Hello everyone, we recently encountered an issue where certain lookup tables were accidentally deleted in our Splunk ES instances. We are currently looking for ways to identify which lookup tables were deleted.
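A hedged starting point is the splunkd REST access log on the search head, which should record DELETE calls against the lookup-table-files endpoint; the exact URI and available fields can differ by version, so treat this as a sketch to adapt rather than a known-good query:

    index=_internal sourcetype=splunkd_access DELETE lookup-table-files
    | table _time host _raw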
Dear all, after forwarding the logs from FortiGate to Splunk, we noticed that our license consumption has increased significantly. Can you help us with trimming the data, plus any additional advice? Thank you.
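A common way to trim ingest is to drop low-value FortiGate events at the heavy forwarder or indexer before they count against the license. A hedged sketch with a made-up sourcetype (fortigate_log) and an illustrative regex; the stanza must match your actual sourcetype, and the regex whatever noise you decide to discard:

    # props.conf
    [fortigate_log]
    TRANSFORMS-drop_noise = drop_fortigate_noise

    # transforms.conf
    [drop_fortigate_noise]
    REGEX = level=(information|notice)
    DEST_KEY = queue
    FORMAT = nullQueue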
Minimum capabilities for Splunk mpreview for Splunk users with non-admin roles (default user, power, user)?

Hi everyone. I'm seeking some wisdom on the minimum Splunk capabilities needed for mpreview for Splunk users with non-admin roles (the default user, power, and user roles). I created a role, metrics_role, with the capabilities from https://docs.splunk.com/Documentation/Splunk/9.0.5/Security/Rolesandcapabilities (run_msearch, list_metrics_catalog, run_commands_ignoring_field_filter), selected all the metrics indexes under the metrics tab, and left the srchFilter/restrictions blank. Neither | mpreview index=awss3_metrics | head 9 nor | mcatalog values(metric_name) where index=awss3_metrics returns any metric events. So far the plan to promote usage of Splunk's metrics indexes has hit a glitch. I would appreciate input as to what capabilities are needed. Thanks in advance for your time. Best.
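For comparison, a hedged authorize.conf sketch of what such a role might look like on disk. The capability and index names come from the post; srchIndexesAllowed is the setting that actually grants search access to the listed indexes (selecting them in the UI should write it for you, so it is worth confirming it ended up in the role), and importRoles is only there to show inheriting the base user role:

    [role_metrics_role]
    importRoles = user
    run_msearch = enabled
    list_metrics_catalog = enabled
    run_commands_ignoring_field_filter = enabled
    srchIndexesAllowed = awss3_metrics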
Is it possible to push data from on-prem to AWS S3 using a Splunk forwarder? Are there any other alternative options?
Hi, we are in the process of migrating our indexes/alerts/reports/dashboards from us-east1 to ca-central1, and I would like to know if there is a way to port all the alerts/reports/dashboards without redoing them manually. Thank you.
I tried to install unprivileged Phantom SOAR on CentOS 7 but I get the same error every time. Can somebody help, please? The error:

    Initializing Splunk SOAR settings
    Failed Splunk SOAR initialization
    Traceback (most recent call last):
      File "/home/phantom/soar/splunk-soar/install/console.py", line 207, in run
        proc = subprocess.run(normalized_cmd, **cmd_args) # noqa: PHANTOM112
      File "/home/phantom/soar/splunk-soar/usr/python39/lib/python3.9/subprocess.py", line 528, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command '['/home/phantom/soar/bin/phenv', 'python', '/home/phantom/soar/bin/initialize.py', '--first-initialize']' returned non-zero exit status 2.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/phantom/soar/splunk-soar/./soar-install", line 72, in main
        deployment.run()
      File "/home/phantom/soar/splunk-soar/install/deployments/deployment.py", line 132, in run
        self.run_deploy()
      File "/home/phantom/soar/splunk-soar/usr/python39/lib/python3.9/contextlib.py", line 79, in inner
        return func(*args, **kwds)
      File "/home/phantom/soar/splunk-soar/install/deployments/deployment.py", line 193, in run_deploy
        operation.run()
      File "/home/phantom/soar/splunk-soar/install/operations/deployment_operation.py", line 135, in run
        self.install()
      File "/home/phantom/soar/splunk-soar/install/operations/tasks/initialize_phantom.py", line 62, in install
        self.initialize_py("--first-initialize")
      File "/home/phantom/soar/splunk-soar/install/operations/tasks/initialize_phantom.py", line 33, in initialize_py
        return self.shell.phenv(cmd, **kwargs)
      File "/home/phantom/soar/splunk-soar/install/console.py", line 275, in phenv
        return self.run([phenv] + cmd, **kwargs)
      File "/home/phantom/soar/splunk-soar/install/console.py", line 224, in run
        raise InstallError(
    install.install_common.InstallError: Failed Splunk SOAR initialization
    install failed.
Hello, we have an account on Splunkbase through which we have published our apps. We have tried to reset the password on the account but do not receive an email. We do not know how to progress this; our partnership account support just closed our request. Thanks, David