All Topics

Hello, I am looking for some guidance on licensing, please. AppD has two licensing models for Commercial SaaS: ABL and IBL.
1) Is it possible to convert a customer's licensing from ABL to IBL?
2) If this is possible, would the controller(s) need to be reconfigured? And would this mean effectively starting over in terms of application mapping, health alerts, dashboards, and non-out-of-the-box instrumentation? The License Entitlements and Restrictions page does not cover this: License Entitlements and Restrictions (appdynamics.com)
3) Both ABL and IBL licensing models are orderable via Cisco Commerce. However, it appears that for Cisco Enterprise Agreements 3.0, only IBL licensing (Enterprise and Premium tiers) is covered, and not ABL (Peak, Pro, Advanced). Does this mean that you can only move a customer to an EA if they are licensed for IBL?
Appreciate your input. Thanks
Hi team, we are using Splunk Cloud and an on-prem HF. We are getting Juniper logs as syslog, and we are using Splunk_TA_juniper in Splunk Cloud. How do I do field extraction from my end?
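If the add-on's extractions are not firing, the usual cause is that events are not arriving with a sourcetype the TA recognizes. A minimal sketch of a syslog file input on the HF, assuming a hypothetical log path and a generic `juniper` sourcetype (check the TA's documentation for the exact sourcetype your device type needs):

```
# inputs.conf on the heavy forwarder (path and index are illustrative)
[monitor:///var/log/juniper/*.log]
sourcetype = juniper
index = network
```

As long as Splunk_TA_juniper is also installed in Splunk Cloud and the sourcetype matches, its search-time field extractions apply there automatically; nothing extra is needed on the HF beyond correct sourcetyping.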
I have installed the Splunk forwarder on a Windows server. I would like to configure an alert so that every time the disk is getting full, an email is sent to my email address. The same if the server is turned off or not detected on the network. Could you please help me? I checked the forum and found related info:

sourcetype="WMI:FreeDiskSpace" PercentFreeSpace<10

What is a metrics index meant for? And where in Splunk do I set up my alert?

Splunk Search Explanation (from Splunk Lantern):

| mstats avg(LogicalDisk.%_Free_Space) AS "win_storage_free" WHERE index="<name of your metrics index>" host="<names of the hosts you want to check>" instance="<names of drives you want to check>" instance!="_Total" BY host, instance span=1m
Search the metrics index(es) where perfmon disk space data is being collected and filter down to the desired host(s) to check.
| eval storage_used_percent=round(100-win_storage_free,2)
Convert percent storage free to percent storage used for readability.
| eval host_dev=printf("%s:%s\\",host,instance)
Create a new field that combines the host and disk drive.
| timechart max(storage_used_percent) AS storage_used_percent BY host_dev
Plot the storage used for each host and disk over time.

Windows disk drive utilization nearing capacity - Splunk Lantern
Monitor data through Windows Management Instrumentation (WMI) - Splunk Documentation
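For the second part of the question (alerting when the server is turned off or disappears from the network), a common pattern is a scheduled search over host metadata that flags hosts that have stopped reporting. A sketch, assuming the host's data lands in indexes you can search and using an arbitrary 2-hour threshold:

```
| metadata type=hosts index=*
| eval minutes_since=round((now() - recentTime) / 60)
| where minutes_since > 120
| table host, minutes_since
```

Saved as an alert that triggers when results are returned, this fires once a host has been silent longer than the threshold, with the email action configured on the alert.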
How do I route ECS real-time logs in Splunk? Please give me an idea of how to route ECS real-time logs into Splunk Enterprise. I can do this with Splunk Cloud, but Splunk Enterprise doesn't seem to have an option for it. Please let me know if anyone has any ideas about real-time logs in Splunk. Please help.
Hello everyone, is it possible to collect logs from a Telegram chat into Splunk? Do any ready-made solutions exist?
I've a couple of queries:

index="main" app="student-api" "tags.studentId"=3B70E5 message="Id and pwd entered correctly" | sort _time desc

and

index="main" app="student-api" "tags.decision"=SOP_REQUIRED "tags.studentId"=3B70E5 | sort _time desc

I'd like to grab just the latest timestamp from both results (and the status code from one of them). However, I'd like to do this reading tags.studentId from a CSV file (the field name is student_id and it has ~100 entries). So the output should look like:

student_id | latest timestamp from 1st query | latest timestamp from 2nd query | status code from 2nd query

I installed Lookup Editor. Please let me know what next steps to follow (if there is an alternative to Lookup Editor, please suggest that too). Thanks
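One way to drive both searches from the CSV is to combine them into a single search, restrict it with a subsearch over the lookup, and split the two cases with stats. A sketch, assuming the CSV has been uploaded as a lookup named students.csv with a student_id column, and that the status code field is literally named status_code (both names are assumptions to adjust):

```
index="main" app="student-api"
    (message="Id and pwd entered correctly" OR "tags.decision"=SOP_REQUIRED)
    [| inputlookup students.csv | rename student_id AS "tags.studentId" | fields "tags.studentId"]
| eval query_type=if(message="Id and pwd entered correctly", "login", "sop")
| stats max(eval(if(query_type="login", _time, null()))) AS login_time
        max(eval(if(query_type="sop", _time, null()))) AS sop_time
        latest(eval(if(query_type="sop", status_code, null()))) AS status_code
        BY "tags.studentId"
| fieldformat login_time=strftime(login_time, "%F %T")
| fieldformat sop_time=strftime(sop_time, "%F %T")
```

The subsearch returns an implicit OR of the ~100 student IDs, so a single pass over the index covers all of them; no map or per-ID looping is needed.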
Q. Splunk Universal Forwarder (Ubuntu) -> Splunk Enterprise (Ubuntu). I set inputs.conf after installing the UF and started the splunk service, but no data shows up when I check on the Enterprise server.
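A few things are worth checking when a UF sends nothing: the UF also needs an outputs.conf pointing at the indexer, and the Enterprise side needs a receiving port enabled. A minimal sketch of the two UF-side files, with hypothetical paths, index, and IP address:

```
# outputs.conf on the UF (indexer IP and port are illustrative)
[tcpout]
defaultGroup = primary

[tcpout:primary]
server = 10.0.0.10:9997

# inputs.conf on the UF
[monitor:///var/log/syslog]
index = main
sourcetype = syslog
```

On the Enterprise server, enable receiving on the same port (Settings > Forwarding and receiving > Receive data) and make sure the target index exists. If data still doesn't arrive, $SPLUNK_HOME/var/log/splunk/splunkd.log on the UF usually shows connection or blocked-queue errors.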
I need to upgrade the IBM WAS add-on. I'm getting a "No spec file" error, as you can see from the attachment. Can anyone help me solve it? Thank you in advance.
We have been using this add-on with Splunk Enterprise and Jira Service Desk on-prem, and we are moving to Splunk Cloud soon. I'm wondering whether this app works with Splunk Cloud and the on-prem version of Jira Service Desk. Has anyone been able to successfully configure it this way?
Howdy, I was wondering if anyone has any guidance on how to ingest data from Nagios Log Server? Prior to my arrival, we used Nagios-LS (I think 2.1) for several years to search and store logs from our devices. I've since implemented Splunk Enterprise, and we are no longer supporting Nagios-LS and RHEL. But I need to ingest the Nagios-LS database into Splunk in order to enable searching of the historical logs (several TBs' worth), and I'm not sure how to make that happen.
Hello Splunkers, has anyone onboarded Oracle Cloud recently? Please share your experience and help with the right add-on to use, as the one available on Splunkbase is marked as no longer supported. Thanks in advance, regards, Moh
As described in the title, we want to show the p50, p90, and p99 of the metrics we created using information points, but I don't know how to do it. I need your help. Thanks very much.
When I search using the Python SDK, I don't seem to see any fields in the results other than the very basic ones like host, source, and sourcetype. Is there a way to get the search to return all the same pre-extracted fields I get when using the Splunk search GUI, like the fields automatically extracted by Splunk, such as those that come in as field1=value1, etc.? Thanks, Mohamed.
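A likely cause: searches submitted over the REST API run in fast mode by default, which skips most field extraction. One common workaround is to force extraction in the search string itself, e.g.:

```
search index=main sourcetype=access_combined | fields *
```

Alternatively, the search job accepts an `rf` (required fields) parameter that can be set to request specific fields even in fast mode, or the job can be created with `adhoc_search_level=verbose`. The index and sourcetype above are placeholders for your own search.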
I have a dashboard with this query:

| dbxquery connection=abcd-local query="SELECT DATE_FORMAT(date(dts),GET_FORMAT(DATE,'USA')) as rate_date, category_id, count(*) FROM url_directory WHERE category_id in (1,2,3) and dts >= now() - INTERVAL 1 MONTH group by rate_date, category_id"
| eval category_id = case( category_id=="1", "Scam", category_id=="2", "Phishing", category_id=="3", "Malicious", 1==1, category_id)

It returns a table like this:

Scam       5   2/1/2023
Phishing   18  2/1/2023
Malicious  23  2/1/2023
Scam       8   2/2/2023
Phishing   12  2/2/2023
Malicious  17  2/2/2023

I want to turn that table into a pivot table like this:

           Scam  Phishing  Malicious
2/1/2023   5     18        23
2/2/2023   8     12        17
...

What can I add to the query (or how can I change the query) to create that pivot table in a dashboard? Thanks!
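Since rate_date, category_id, and the count already come back as separate columns, the reshaping can be done in SPL with xyseries. A sketch appended to the existing query, assuming the count column from dbxquery comes back named "count(*)" (verify the actual column name in your results):

```
| rename "count(*)" AS total
| xyseries rate_date category_id total
```

`chart sum(total) OVER rate_date BY category_id` would produce the same layout; xyseries is simply the more direct "rows to matrix" reshape when each (date, category) pair already has exactly one row.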
I have a table of data with values like this:

String     Numeric
Client 1   99.9
Client 2   99.2
Client 3   93.8
Client 4   89.0
Client 5   72.9

and so on for miles.... I would like to add a column which outputs the decile rank of the clients based on their numeric value. Can you point me in the right direction please? Thank you, -MD
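Decile rank can be derived from each row's position after sorting. A sketch, assuming the fields are literally named String and Numeric:

```
| sort 0 - Numeric
| streamstats count AS rank
| eventstats count AS total
| eval decile=ceil(rank * 10 / total)
| table String, Numeric, decile
```

With this, the top 10% of clients by Numeric get decile=1, the next 10% get decile=2, and so on down to 10. `sort 0` avoids the default 10,000-row truncation on large result sets.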
Hello, I am using the splunk-utils SDK to try to search Splunk 9.0.4 from an external app. I am able to submit an initial search with a bearer token and can see my local instance receiving it and returning a search SID. However, when I try to retrieve the results with getData, I get an error saying "Invalid Version: undefined".

// https://splunkui.splunk.com/Packages/splunk-utils/Search
const n = await getData(
    sid,
    "results",
    { output_mode: "json_cols" },
    { splunkdPath: serverURL, app: "myapp", owner: "nobody" },
    headers
)

Tracing the node_modules, I can see that getData in search.js is trying to figure out the version, but I am not following how. This seems to be the line that determines whether v2 should be used:

// Use V2 endpoints only where available
// https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTsearch#Semantic_API_versioning
var V2_ENDPOINTS = ['results', 'results_preview', 'events'];
var endpointVersion = V2_ENDPOINTS.includes(endpoint) && gteSplunkVersion(_config.versionLabel, '9.0.2', '9.0.2209') ? '/v2' : '';

I have been looking at this example from @ryanoconnor and @nhaq but am not seeing how the right API version is determined: https://github.com/splunk/react_search_example/blob/main/src/App.js

Any ideas how to update my call to tell getData what version to use? I am assuming it is unhappy about a missing version, but maybe it's something else. Full error:

index.js:1 TypeError: Invalid Version: undefined
    at new SemVer (semver.js:19:1)
    at patch (patch.js:2:1)
    at gteSplunkVersion (search.js:53:1)
    at getData (search.js:326:1)
    at loadResults (splunksearch.js:80:1)
    at retrieveJob (splunksearch.js:56:1)
I'm currently adding notes through a playbook via Python automation. I'll have a formatted string:

f'<a href="https://www.url.com"> Link Label </a>'

Then when I add it to the task notes, the inside of the tag gets wiped out and I'm left with:

<a> Link Label </a>

I've seen workable links in the notes; how do I go about this?
Hello Splunk experts - I am trying to create a query that shows the difference in results from two distinct time frames. Original search:

index=summary_nexpose
| table AssetName, cveID, Exposure
| mvcombine delim="," cveID
| nomv cveID
| eval numberCVEs=mvcount(split(cveID,","))
| rename numberCVEs AS "Number of CVEs"

What I need to do is add two timeframes - results from 30 days ago vs. results from today - and then show the difference in events between those two timeframes.
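One common pattern is to run the same base search over each window, tag each event with its window, and then compare per asset. A sketch against the original search, with illustrative time boundaries (a one-day slice starting 30 days ago vs. today):

```
index=summary_nexpose earliest=-30d@d latest=-29d@d
| eval window="30d_ago"
| append
    [ search index=summary_nexpose earliest=@d latest=now
    | eval window="today" ]
| stats dc(cveID) AS cve_count BY AssetName, window
| xyseries AssetName window cve_count
| eval difference='today' - '30d_ago'
```

The append-based approach keeps both windows in one result set; for very large windows, two separate scheduled searches writing to a summary index would scale better.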
Regarding Windows Print Monitoring, what do each of the "operation" field values mean, i.e., add, set, baseline? For example, in the event below, what does "operation=set" mean? 04/21/2014 13:51:59.486 operation=set type=Printer ComputerName=ops-sys-001 printer=HP LaserJet M3035 mfp PCL6 share= port=IPAddress driver=HP LaserJet M3035 mfp PCL6 comment=None location= separate_file= print_processor=hpzppwn7 data_type="RAW" parameters= status="normal" attributes=979 priority=6 default_priority=2 jobs=8 average_PagePerMinute=73  
If I am starting with this query:

index=anIndex sourcetype=aSourcetype ( aJobName AND "COMPLETED OK" )

The job I'm interested in runs once per day. I would like the completion time on the Y axis and the day on the X axis. I found this previous piece of code but am not sure how to adapt it to my query results:

| eval _time=strptime(timestamp,"%F %T.%Q")
| eval t=split(substr(timestamp, 12, 8),":")
| eval h=mvindex(t,0), m=mvindex(t,1), s=mvindex(t,2)
| eval v=(h)+(m/100)
| bin _time span=1d
| chart max(v) over _time by job
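Assuming each event's _time already reflects the completion time, the snippet can be simplified by deriving the time-of-day from _time directly instead of re-parsing a raw timestamp field. A sketch (the decimal-hour encoding is one choice among several):

```
index=anIndex sourcetype=aSourcetype ( aJobName AND "COMPLETED OK" )
| eval completion_hour = tonumber(strftime(_time, "%H")) + tonumber(strftime(_time, "%M")) / 60
| bin _time span=1d
| chart max(completion_hour) OVER _time
```

completion_hour plots as a decimal hour of day (e.g. 13.5 for 13:30) on the Y axis, with one point per day on the X axis. If the raw event carries a separate completion timestamp field, the original strptime approach would be needed instead.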