All Topics

I've been trying to package the app using the packaging toolkit and found an interesting issue. I have an older manifest file that has now been updated with the platform requirements section:

    "platformRequirements": {
        "splunk": {
            "Enterprise": ">=8.0.0",
            "Light": ">=8.0.0",
            "Free": ">=8.0.0"
        }
    }

Now when I run python -m slim package app-folder I get this error:

    Version requirement includes no supported version of Splunk Enterprise: >=8.0.0

I'm not really sure what that means; could someone explain? If I remove that section, the app packages just fine. My guess is that the section is not really necessary in order to publish the app, since the supported platform versions can be set on the Splunkbase page, if I'm not mistaken?
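In case it helps a later reader: this error typically appears when the installed slim build's internal list of known Splunk releases predates the versions named in the manifest, so it treats ">=8.0.0" as matching nothing it knows about. Upgrading the splunk-packaging-toolkit, or loosening the requirement, are the usual workarounds (worth verifying against the toolkit docs). The section itself is optional; if kept, it must be well-formed JSON. A sketch, with the Light and Free entries dropped (those editions have reportedly been discontinued):

```json
"platformRequirements": {
    "splunk": {
        "Enterprise": "*"
    }
}
```

With "*" the toolkit should accept any Enterprise version it recognizes, and the compatibility shown to users can still be set on the app's Splunkbase page.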
Hello community. I have a question and I don't know whether what I have in mind can be achieved, how to do it, or whether Splunk already has a way to answer it. My question is: how can I find out how many of the bytes and extracted fields from the events in a particular index are actually used by my searches? I would like to identify whether I am indexing more data than is really useful for my dashboards.
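One way to start answering this from the search side — a hedged sketch, not a complete cost analysis — is fieldsummary, which reports how often each extracted field actually occurs in a sample of the index:

```
index=your_index earliest=-24h
| fieldsummary
| table field count distinct_count
| sort - count
```

Fields with near-zero counts are candidates for trimming. For the "are my dashboards actually using this field" half, inspecting the searches recorded in the _audit index for field names is another, more manual, avenue.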
Hi. I would like to know if there is a simple way, via Splunk XML, to create a "for loop"-like routine to generate panels or rows from a simple SPL search. A simple example: I have a lookup my_hosts.csv with a list of hosts (host1, host2, host3). Normally, if I want one panel per host, I have to write the XML (or use the UI) and create every panel manually, each containing its own search (e.g. search ... host=host1 "critical" | stats count), three times over. I would like to generate those panels automatically from a simple | inputlookup my_hosts.csv with a single loop, something like this:

    for myhost in "| inputlookup my_hosts.csv"; do
        <panel>
            <single>
                <title>$myhost$</title>
                <search>search ... host=$myhost$ "critical" | stats count</search>
            </single>
        </panel>
    done

Is there a way? Thanks.
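SimpleXML itself has no loop construct (for some visualizations, trellis layout can approximate one panel per field value). A common workaround is to generate the dashboard XML outside Splunk with a small script and paste it in. A minimal sketch — the index name and search text are placeholders to adapt:

```python
# Generate one SimpleXML <panel> per host found in a lookup-style CSV.
import csv
import io

PANEL_TEMPLATE = """  <panel>
    <single>
      <title>{host}</title>
      <search>
        <query>search index=main host={host} "critical" | stats count</query>
      </search>
    </single>
  </panel>"""

def render_row(hosts):
    """Return a dashboard <row> containing a panel for each host."""
    panels = "\n".join(PANEL_TEMPLATE.format(host=h) for h in hosts)
    return "<row>\n%s\n</row>" % panels

# Simulate "| inputlookup my_hosts.csv" with an in-memory CSV.
lookup = io.StringIO("host\nhost1\nhost2\nhost3\n")
hosts = [r["host"] for r in csv.DictReader(lookup)]
xml = render_row(hosts)
print(xml)
```

In practice you would read the real my_hosts.csv from disk and re-run the script whenever the host list changes.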
Hi guys, I'm new to Splunk and trying to achieve the requirement below. Please help me. If the system name does not start with "AUH" and the user is part of "corporate_VPN", then return the result in a table with the user name, computer name, and group name. The user field is the common value between these two log lines, but the system name comes from the other log line.

    2020-05-21 13:47:18 System4.Info 10.10.10.1 date=2020-05-21 time=13:47:45 devname="FW01-T1" devid="FG201" logid="39949" type="event" subtype="vpn" level="information" logdesc="SSL VPN statistics" action="tunnel-stats" tunneltype="ssl-tunnel" user="testuser" group="corporate_VPN" msg="SSL tunnel statistics"

    2020-05-21 13:47:51 System4.Info 10.10.10.1 date=2020-05-21 time=13:47:51 devname="FW01-T1" devid="FG201" logid="45057" type="event" subtype="endpoint" level="information" logdesc="FortiClient connection added" action="add" status="success" connection_type="sslvpn" count=1 user="testuser" systemname="AUHWIN01" msg="Add a FortiClient Connection."

Regards, Mohan
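In SPL, correlating two event types on a shared field is usually done with something like stats values(group) as group, values(systemname) as systemname by user, followed by a where clause on the two conditions (untested sketch). The join logic itself can be illustrated in Python against the two sample events, plus an invented second user ("bob"/"DXBWIN07" are made up for contrast):

```python
# Correlate the two event types on the shared user field, then keep rows
# where systemname does NOT start with "AUH" and group IS corporate_VPN.
import re

events = [
    'logdesc="SSL VPN statistics" user="testuser" group="corporate_VPN"',
    'logdesc="FortiClient connection added" user="testuser" systemname="AUHWIN01"',
    'logdesc="SSL VPN statistics" user="bob" group="corporate_VPN"',
    'logdesc="FortiClient connection added" user="bob" systemname="DXBWIN07"',
]

def kv(raw):
    """Extract key="value" pairs, mimicking Splunk's automatic kv extraction."""
    return dict(re.findall(r'(\w+)="([^"]*)"', raw))

# Merge all fields seen for each user (the stats-by-user step).
by_user = {}
for e in events:
    f = kv(e)
    by_user.setdefault(f["user"], {}).update(f)

# Apply the two filters (the where step).
result = [
    (u, f["systemname"], f["group"])
    for u, f in by_user.items()
    if f.get("group") == "corporate_VPN"
    and not f.get("systemname", "").startswith("AUH")
]
print(result)  # only bob/DXBWIN07 survives both filters
```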
Search not executed: The minimum free disk space (995MB) reached for /opt/splunk/var/run/splunk/dispatch
I'm generating the output for columns 2 and 3 from a search. I want to add a column at the start of the table. In the example below, I want to add a first column "Product" with the values "Product1" and "Product2". Please shed some light here; I appreciate your help in advance.

Example:

    Product   | Pass Count | Fail Count
    Product1  | 5          | 5
    Product2  | 1          | 4
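If each row comes from a known search, a hedged SPL sketch (the search and column names here are assumptions): eval creates the literal column, and table moves it to the front:

```
your_product1_search
| eval Product="Product1"
| table Product "Pass Count" "Fail Count"
```

For the second product, run its search in an append subsearch with its own eval Product="Product2", then table the three columns once at the end.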
Hi all, can you please let me know what the approach and troubleshooting steps would be if the Splunk HEC implementation on the indexers runs into issues? This also covers getting syslog data from syslog-ng servers into the indexers using HEC.
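As a starting checklist (endpoint paths as documented for HEC; indexer name and token are placeholders), the health endpoint plus a manual test event usually localize a problem quickly:

```
curl -k https://<indexer>:8088/services/collector/health
curl -k https://<indexer>:8088/services/collector/event \
     -H "Authorization: Splunk <hec-token>" \
     -d '{"event": "hec smoke test"}'
```

If both succeed directly against the indexer but events still do not arrive via syslog-ng, the issue is likely on the sending side (syslog-ng destination config, network path, TLS) rather than in HEC itself.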
Hi all, I have 3 indexers and a VIP load balancing across them. I want to send the HTTP and HTTPS events via HEC to the VIP so that it load-balances between the three indexers. How do I do this? Also, I do not want to send the HEC events to heavy forwarders which then forward them to the indexers; I have enabled HEC directly on the indexers. Do I have to create the same token across all three indexers and then, when sending, include that token and the VIP? Is that how it's done, or is there another way?
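Yes — the usual pattern is exactly that: define an identical token on every indexer and point senders at the VIP on the HEC port. A sketch (token value and index name are placeholders):

```
# inputs.conf, identical on all three indexers
[http]
disabled = 0
port = 8088

[http://my_hec_input]
token = 11111111-2222-3333-4444-555555555555
index = main
```

Clients then post to https://<vip>:8088/services/collector/event with that one token; because the token is the same everywhere, whichever indexer the VIP selects will accept the request. One caveat: if clients verify TLS, the certificate presented needs to be valid for the VIP name.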
I created an alert with a basic search: index=_internal | stats count. Cron expression: */1 * * * *. For the alert condition I chose Custom, with the trigger condition: search count > 0. The trigger action is "Add to Triggered Alerts". But no alert messages are showing on the Triggered Alerts page. I'm testing with Splunk versions 8.0.4 and 8.0.2; the same alert configuration works normally in Splunk 7.
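To see whether the alert is firing at all versus failing to record, the scheduler's own log is usually the first stop — a sketch, with the saved search name as a placeholder:

```
index=_internal sourcetype=scheduler savedsearch_name="your_alert_name"
| table _time status result_count run_time
```

A status of skipped, a missing row per minute, or result_count=0 each point to a different fix (concurrency limits, schedule, or the trigger condition respectively).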
In my testing environment I have three main indexes that are specific to the data stored within them. I want to change the host value of all events by appending a string at the end of the host at index time (similar to a domain name, as an example), and I know how to do this with the props/transforms configuration, but I can only do it based on sourcetype, host, or source. Is there a way to do the same thing based on index? For example:

    index1: host -> host.test1, something -> something.test1
    index2: tmp -> tmp.test2
    index3: hello -> hello.test3

I know how to do this based on the host field, for example:

    # props.conf
    [host::*]
    TRANSFORMS-appendname = append_name

    # transforms.conf
    [append_name]
    SOURCE_KEY = MetaData:Host
    REGEX = (.*)
    FORMAT = $1.test
    DEST_KEY = MetaData:Host

In this case, every host that forwards logs to my indexer would have its events stored with host = hostname.test. How do I do the same thing but based on the target index?
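For what it's worth, there is no [index::...] stanza type in props.conf, but newer Splunk versions (7.2+, if I remember correctly) support INGEST_EVAL in transforms.conf, and its eval expressions can reference the target index at ingest time. A hedged sketch, worth verifying against the transforms.conf spec (":=" replaces a field's value in INGEST_EVAL, whereas "=" adds one):

```
# transforms.conf
[append_name_by_index]
INGEST_EVAL = host := host . case(index == "index1", ".test1", index == "index2", ".test2", index == "index3", ".test3", true(), "")

# props.conf
[host::*]
TRANSFORMS-appendname = append_name_by_index
```

The true(), "" default keeps hosts in any other index unchanged.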
I'm seeing the error below under Messages in my Splunk Enterprise console:

    Missing or malformed messages.conf stanza for TCPOUT:FORWARDING_BLOCKED_<indexer IP address>_default-autolb-group <DC hostname>_10 5/22/2020, 2:00:52 PM

The blocked host name belongs to a domain controller where I just deployed a universal forwarder, and I'm not receiving any data from that forwarder. This is harder than I anticipated; I just need to audit security logs from my domain controllers.
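FORWARDING_BLOCKED generally means the indexer's ingest queues are full (or the receiving port is unreachable), and the messages.conf warning is mostly the console failing to pretty-print that condition. A sketch for checking queue saturation on the indexer:

```
index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
```

Also worth confirming: the indexer is actually listening on its receiving port (9997 by default), and the UF's outputs.conf points at that host:port.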
Hi folks, I have to compare (as a percentage) and display the year-to-date value for 2020 against the year-to-date value for 2019. For instance, right now year-to-date runs through May 22 2020, so I need the count from January 1 2020 to May 22 2020 versus the count from January 1 2019 to May 22 2019, as a percentage, plus a trend. Thanks in advance.
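A hedged sketch of one common approach (index name is a placeholder; untested): pull a bit over a year of events, keep only those at or before today's day-of-year within each year, then compare the two yearly counts:

```
index=your_index earliest=-1y@y latest=now
| eval year=strftime(_time, "%Y"), doy=tonumber(strftime(_time, "%j"))
| where doy <= tonumber(strftime(now(), "%j"))
| stats count by year
| sort 0 year
| autoregress count as prev_count
| eval pct_change=round((count - prev_count) / prev_count * 100, 2)
```

For the trend view, timewrap over a timechart is the usual companion. One caveat: the day-of-year filter drifts by a day across leap years.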
Hello, I have the following lines in my logs:

    [Kafka Server 4], shut down completed (kafka.server)
    [Kafka Server 4], start completed (kafka.server)

The number before the ] could be anything between 0-9. So I am trying the following:

    index=* namespace=XXXXXX | search ["Kafka Server"\s\d]\,\ss\w+

and I am expecting the result to be:

    [Kafka Server 4], shut
    [Kafka Server 4], start

but I am not getting the desired result. When I try:

    index=* namespace=XXXXXX | search ["Kafka Server"

I get:

    [Kafka Server 4], shut down completed (kafka.server)
    [Kafka Server 4], start completed (kafka.server)
    [Kafka Server 5], shut down completed (kafka.server)
    [Kafka Server 6], start completed (kafka.server)
    ...etc.

Could someone assist me please? Regards
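The search command does literal/wildcard matching, not regular expressions, which is why the pattern above never matches. The regex-aware commands are | regex (filter events) and | rex (extract fields) — an untested SPL sketch would be index=* namespace=XXXXXX | regex _raw="\[Kafka Server \d\], (shut|start)". The pattern itself can be sanity-checked against the sample lines:

```python
# Verify the regex matches the two wanted lines and not an unrelated one.
import re

pattern = re.compile(r"\[Kafka Server \d\], (shut|start)")

lines = [
    "[Kafka Server 4], shut down completed (kafka.server)",
    "[Kafka Server 4], start completed (kafka.server)",
    "[Kafka Server 9], unclean shutdown (kafka.server)",  # made-up negative case
]
matches = [bool(pattern.search(l)) for l in lines]
print(matches)  # third line lacks ", shut"/", start" right after the bracket
```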
Hi team, I have a requirement to get the APIs triggered per custkey in a single query.

Query 1:

    /token host="test-host-*" custkey="*-*-*" | stats values(transid) as transid by custkey

Note: there can be more than one transid per custkey. The output looks like this:

    custkey     | transid
    sdsdamndka  | dkjndkwndkw
                | ljdosjdowqsj
    hdshsihdiwh | ndskndkndkq

Query 2 (using the transid values from the query 1 output):

    some-string (transid from query 1 output) | stats list(bearer)

Output: euwgw4f56vxevd7vst6q8qq

Query 3 (using the bearer from query 2):

    (bearer from query 2) "test-host-*" | table apis, token_api_status

Output:

    apis     | token_api_status
    /api/te1 | 200
    /api/te2 | 200
    /api/te3 | 400

I want to get the full list of apis and token_api_status per custkey, as shown below, using a single query:

    custkey  | apis     | token_api_status
    dhsjhds  | /api/te1 | 200
             | /api/te2 | 200
    rufidfhj | /api/jh1 | 200
             | /api/jh2 | 200

So here I want to relate custkey (query 1) with apis and token_api_status. Could you please help me achieve this?
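One way to stitch the three stages together — a hedged, untested sketch using join, workable only while the subsearch result counts stay inside Splunk's subsearch limits (the search fragments are taken from the question):

```
/token host="test-host-*" custkey="*-*-*"
| stats values(transid) as transid by custkey
| mvexpand transid
| join transid [ search some-string | fields transid bearer ]
| join bearer  [ search "test-host-*" | fields bearer apis token_api_status ]
| stats values(apis) as apis, values(token_api_status) as token_api_status by custkey
```

The map command is the alternative when the per-custkey searches must run independently, at the cost of one search execution per row.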
Hi all, I am unable to log in to the Splunk heavy forwarder web interface; the page is not displayed. Please help me resolve the issue. Regards, Vijay K.
Hello, I use a lookup to find IOCs in my logs. In my lookup IOC.csv, the FQDN column contains: lost.com and www.lost.com. My log is:

    srcip=9.9.9.9 dstip=8.8.8.8 query=www.lost.com

My search:

    index=test | lookup IOS.csv FQDN as query OUTPUT FQDN | search FQDN=*

The problem is that only the IOC www.lost.com comes out, not lost.com. I did find a solution based on inputlookup, but it is too resource-hungry. Do you have a proposal for me? Thank you.
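If the goal is for lost.com in the lookup to also match any subdomain seen in the events, one standard approach (a sketch; the lookup name and output column are illustrative) is a wildcard match on the lookup definition, with wildcard rows in the CSV:

```
# transforms.conf
[ioc_fqdn]
filename = IOC.csv
match_type = WILDCARD(FQDN)

# IOC.csv
FQDN,is_ioc
lost.com,true
*.lost.com,true
```

Then something like index=test | lookup ioc_fqdn FQDN as query OUTPUT is_ioc | search is_ioc=true should catch both the apex domain and www.lost.com without the heavier inputlookup-based workaround.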
Hi all,

    index=XXX sourcetype="YYY" host="ZZZ_IN_*" | stats count by host

will display all the nodes which are up and running in the tier. If one or two nodes go down, how can I create an alert for it?

Some background on our problem: the nodes have both IIS and app logs, stored on separate drives. When the IIS log drive fills up we still do not receive an alert from a metadata query, because the universal forwarders keep capturing the app logs. So we went with the query above but couldn't get the result we need. Thanks in advance for your help. Regards, Parvathinathan CT
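The usual pattern for "alert when an expected host goes quiet" is to compare against a maintained list of expected hosts — expected_hosts.csv below is an assumed lookup (one host column) that you would need to create:

```
index=XXX sourcetype="YYY" host="ZZZ_IN_*" earliest=-15m
| stats count by host
| append [| inputlookup expected_hosts.csv | eval count=0]
| stats sum(count) as events by host
| where events = 0
```

Alert when this returns any rows: those are the silent nodes. Because this counts actual events per source, it can also be scoped to just the IIS sourcetype to catch the full-drive case the metadata query misses.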
I have just upgraded my Splunk deployment server from 7.3 to 8.0 and after upgrading the web UI will not come up. I was trying to troubleshoot and was looking at the web_service.log but for some reason it is empty (0 bytes). I cannot figure out why it is not getting any logs or why the UI drops after the upgrade. Any suggestions would be greatly appreciated.
Hi team, I am trying to use the command below to delete the identities.conf file in /opt/splunk/etc/apps/splunk_app_db_connect/local; however, I am getting "authentication failed" / "method not allowed". What is the exact syntax to delete this .conf file using the API?

    curl -k -u admin:password --request DELETE https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/dbxproxy/identities -d \"{\"name\":\"user\",\"username\":\"appusername\",\"password\":\"apppassword\"}\"

I also tried without passing the -d parameter:

    {"code":405,"message":"HTTP 405 Method Not Allowed"}
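For what it's worth, the REST API manages conf files through the generic configs/conf-{filename} endpoint, and it deletes individual stanzas rather than whole files. A hedged sketch (the stanza name is a placeholder; DB Connect may also expose its own endpoints, so its docs are worth checking):

```
# list the stanzas currently in identities.conf for this app
curl -k -u admin:password \
  https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/configs/conf-identities

# delete one stanza; no -d body is needed for DELETE
curl -k -u admin:password -X DELETE \
  https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/configs/conf-identities/<stanza_name>
```

Removing the file itself generally means deleting every stanza this way, or removing the file from disk and restarting Splunk.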
I see from the documentation that Splunk currently supports statsd metric types Gauge and Counter. Are there plans to support Timer in the near future? Is there a documented method for perhaps emulating this type in current support? In particular, we would like to send latency metrics to the statsd endpoint in Splunk. Doc: https://docs.splunk.com/Documentation/Splunk/8.0.4/Metrics/GetMetricsInStatsd Thanks.
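Pending native Timer support, one workaround is to emit each latency observation as a gauge and do the aggregation (percentiles, averages) with mstats at search time. A minimal sketch — the metric name and the 8125 listening port are assumptions to adapt to your statsd input config:

```python
# Emit a latency measurement in plain statsd wire format over UDP.
import socket

def statsd_line(name, value, mtype="g"):
    """Format a statsd metric line, e.g. 'api.latency:123|g'."""
    return f"{name}:{value}|{mtype}"

def send_metric(line, host="localhost", port=8125):
    """Fire-and-forget UDP send to the statsd listener (port is an assumption)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(line.encode("ascii"), (host, port))

line = statsd_line("api.request.latency_ms", 42)
print(line)
# send_metric(line)  # uncomment when pointed at a real statsd input
```

Compared with a true Timer type, the aggregation moves from ingest time to search time, which costs a bit more at query time but keeps every raw observation available.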