All Topics

I am testing PAVO Getwatchlist Add-on 1.1.7 on Splunk Enterprise 9.0.0, and it seems to work almost fine. I need to use additional columns, so I set the configuration in getwatchlist.conf like the following:

1=additional1
2=additional2
3=additional3
...

I expected the field names of the additional columns to become "additional1", "additional2", and so on, but they became "1", "2", ... I tried modifying getwatchlist.py as follows:

$ diff getwatchlist.py getwatchlist_fix.py
388c388
< row_holder[add_col] = self.format_value(row[int(add_col)])
---
> row_holder[add_cols[add_col]] = self.format_value(row[int(add_col)])

After that, the field names became "additional1", "additional2", ... as expected. I am not sure which behavior is correct, but I feel "additional1", "additional2", ... are better.

I've tried to explore every link and doc on the AppDynamics website, but I still can't find any related information.

1. Is there any user limit in AppDynamics, or are we free to create as many users as we want per controller? And what does the term "controller" actually mean? Is it counted as an account or as a license that I've purchased?

2. Is there any data ingest or data usage limit on the AppDynamics Pro plan, for example a limit of 300 GB per month where I have to pay more if I need to increase it, or is what I found in your docs correct (100GB/Day/User)?

Regards,
Yohan

Bear with me as this is the first time I'm doing this. I configured a VMware host to send its events via syslog to Splunk, and it is working. Raw logs are stored in /opt/syslog/192.168.x.x in four different types (local, daemon logs, etc.).

Now, how do I index these logs? How do I create a new index=vmware which will start indexing the raw logs so I can start searching? I Googled a bit but I can't find a step-by-step tutorial.

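A minimal sketch of the two pieces involved, assuming the directory path and the index name "vmware" from the post; the sourcetype is an assumption, and the stanzas belong in indexes.conf on the indexer and in inputs.conf on whichever instance can read the files:

# indexes.conf -- create the target index
[vmware]
homePath   = $SPLUNK_DB/vmware/db
coldPath   = $SPLUNK_DB/vmware/colddb
thawedPath = $SPLUNK_DB/vmware/thaweddb

# inputs.conf -- monitor the syslog directory and send events to that index
[monitor:///opt/syslog/192.168.x.x]
index = vmware
sourcetype = syslog
disabled = false

After a restart, the events should be searchable with index=vmware.
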
Hi all,

I have a dashboard made with Dashboard Studio with multiple tables and graphs. To make it more interactive, I would like to be able to click in my visualizations and update a token that is shared across the dashboard. I have created a dynamically populated multiselect and drilldowns in my visualizations. The drilldown recognizes the token of the multiselect, but when I click, the visualizations update for a second and then reset. The token seems to update and then gets immediately reset by the multiselect field, which does not update. Similar behaviour occurs for the dropdown input. Does anyone know why this happens and how to fix it?

Hi, I get data from a DB using dbxquery. I set the time filter with:

WHERE time BETWEEN DATE_TRUNC('hour',NOW()) - INTERVAL '4 HOURS' AND DATE_TRUNC('hour',NOW()) - INTERVAL '2 HOURS'

I use DATE_TRUNC in order to get data from exact hours (7:00-9:00 instead of 7:10-9:10, for example). After that, in Splunk, I use span=2h, and I want the alert to be sent every 2 hours.

There was a problem from 4:00-6:00, but at 9:30 I don't receive any alert (because nothing is returned from the search). However, now, at 10:10, when I run the search, it returns the result that I want:

_time              id    count
2022-10-14 04:00   123   0
2022-10-14 06:00   123   0

Effectively, there is no data for id "123" in the filtered period in the SQL query. Do you have any idea how I can do this more generally, without filtering time in SQL the way I am doing now, to avoid this problem? Or is there a way to filter time in Splunk instead of SQL? Here is my search:

|dbxquery connection="database" query="
SELECT id as id, time as time, count(*) as count
FROM table
WHERE time BETWEEN DATE_TRUNC('hour',NOW()) - INTERVAL '4 HOURS' AND DATE_TRUNC('hour',NOW()) - INTERVAL '2 HOURS'
GROUP BY id, time"
|lookup lookup.csv id OUTPUT id
|eval list_id = "123,466,233,111"
|eval split_list_id= split(list_id ,",")
|mvexpand split_list_id
|where id=split_list_id
|eval _time=strptime(time,"%Y-%m-%dT%H:%M:%S.%N")
|timechart span=2h count by id
| untable _time id count
| makecontinuous
| where count = 0
|stats max(_time) as date_time by id
|eval date_time=strftime(date_time,"%Y-%m-%dT%H:%M:%S")

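One way to move the time filter out of SQL and into SPL is sketched below. It assumes the "time" column parses with the same strptime format used in the post; the connection name, table, and downstream logic are placeholders carried over from the post, and pulling the table without any WHERE clause may be expensive, so a coarser SQL filter (for example, the last day) can be kept in front of it:

|dbxquery connection="database" query="SELECT id as id, time as time, count(*) as count FROM table GROUP BY id, time"
| eval _time=strptime(time,"%Y-%m-%dT%H:%M:%S.%N")
| where _time >= relative_time(now(),"-4h@h") AND _time < relative_time(now(),"-2h@h")
| timechart span=2h count by id

Because the window is computed in Splunk with relative_time, the same search can be reused for any alert schedule by changing the two offsets.
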
Hello everyone, I am trying to install a forwarder on Linux:

chown -R splunk:splunk /opt/splunkforwarder
sudo -u splunk sh -c "/opt/splunkforwarder/bin/splunk set deploy-poll deployment_address:8089"

and I get the error:

sh: /opt/splunkforwarder/bin/splunk: Permission denied

What should I look at? What could the problem be?

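A few things worth checking (a sketch, assuming the default install path from the post): the splunk user needs the execute bit on the binary and on every parent directory, and a filesystem mounted with noexec also produces "Permission denied":

ls -ld /opt /opt/splunkforwarder /opt/splunkforwarder/bin   # execute (x) needed on each directory for the splunk user
ls -l /opt/splunkforwarder/bin/splunk                       # execute bit needed on the binary itself
mount | grep /opt                                           # look for a "noexec" mount option
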
Please help. I need to compare and display the last 30 days' data and the last 15 minutes' data.

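One possible shape for such a comparison is sketched below; the index name and the final aggregation are assumptions, since the post does not say what is being compared:

| multisearch
    [ search index=main earliest=-30d@d latest=now | eval period="last 30 days" ]
    [ search index=main earliest=-15m latest=now | eval period="last 15 minutes" ]
| stats count by period

Each multisearch branch carries its own time range, so the two periods can be counted (or charted) side by side in a single search.
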
Hello,

Kindly assist me with this query/solution. I have a long list of IPs that logged in. Out of this list, I want to know the percentage of only 5 IPs. When I use this query:

---My base query----
| search NOT IPs IN ("IP.A", "IP.B", "IP.C", "IP.D", "IP.E")
| stats count by IP
| eventstats sum(count) as perc
| eval percentage= round(count*100/perc,2)
| fields - perc

it gives me a table like this:

IP     Count   Percentage
IP.A   52      37
IP.B   35      26
IP.C   22      18
IP.D   44      17
IP.E   11      2

The total percentage = 100%. But when I use this query:

---My base query---- ip=*
| stats count by IP
| eventstats sum(count) as perc
| eval percentage= round(count*100/perc,2)
| fields - perc

I get about 5 pages listing all the IPs and their respective percentages, including IP.A to IP.E, which all together total 100%, but the percentages of IP.A to IP.E change completely. The 5 IPs shouldn't give me 100%; they should be a percentage fraction of the whole. Please help.

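A sketch of one way to keep the percentages relative to the whole: compute the total over all IPs first, and only then filter down to the 5 IPs of interest (the base query and the IP names are placeholders from the post):

---My base query---- ip=*
| stats count by IP
| eventstats sum(count) as total
| search IP IN ("IP.A", "IP.B", "IP.C", "IP.D", "IP.E")
| eval percentage=round(count*100/total,2)
| fields - total

Because eventstats runs before the filter, "total" is the count across every IP, so the five rows show their share of all logins rather than summing to 100%.
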
I'm trying to do something pretty straightforward, and have looked at practically every "average" answer on Splunk Community, but no dice. I want to compare total and average webpage hits on a line chart. I calculated and confirmed the standard (fillnull value=0) and cumulative (fillnull value=null) averages with the following:

host....
| bin _time span=1h
| eval date_hour=strftime(_time, "%H")
| stats count as hits by date, date_hour
| xyseries date, date_hour, hits
| fillnull value=0
| appendpipe
    [| untable date, date_hour, hits
     | eventstats avg(hits) as avg_events by date_hour
     | eval "Average Events"= avg_events
     | xyseries date date_hour avg_events
     | head 1
     | eval date="Average Events"]

How do I plot hits and avg_events on a line chart by date_hour? Also, if there is less convoluted SPL to get the same results, I'd love to know that as well, because I think I found where Google ends.

Thanks!

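A less convoluted sketch, assuming "host...." stands for the base search and that "date" can be derived from _time as in the post: aggregate hits per date and hour-of-day once, then roll that up into a total and an average per hour-of-day, which chart as two lines with date_hour on the x-axis:

host....
| bin _time span=1h
| eval date=strftime(_time, "%Y-%m-%d"), date_hour=strftime(_time, "%H")
| stats count as hits by date date_hour
| stats sum(hits) as "Total hits" avg(hits) as "Average hits" by date_hour

The second stats collapses the per-date rows, so "Average hits" is the mean number of hits seen in that hour across all days in the search window.
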
Hi,

I created new indexes for another ITSI environment, but I lost some of the entities, which I have already fixed. Now my issue is that the health score status is not showing in the deep dive.

Configure multiple ITSI deployments to use the same indexing layer - Splunk Documentation

I noticed that this search is not returning the two important fields alert_color and alert_value. I suspect this is the issue, but I am not sure. Why are those fields not showing?

Search: 'get_full_itsi_summary_kpi(SHKPI-bee4cb6f-8691-4ee2-97fd-f40ed45f4acd)' service_level_kpi_only

Thanks,
adhoc123

Hi community,

I am trying to write a query that looks for bulk email (say >50 messages) from a single sender to multiple recipients, where each message has a unique subject.

Sender              Recipient                 Subject
bob @ scamm . com   alice @ mycompany .net    spam for alice
bob @ scamm . com   jane @ mycompany .net     spam for jane
bob @ scamm . com   fred @ mycompany .net     spam for fred

I can add this to my search:

| stats count by subject sender recipient
| search count>50

but I just want to see results where the subjects are unique while the sender is the same. Ideally I'd like it to spit out a table of the sender, subject(s) and recipient(s).

Thank you

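A sketch of one way to do this, assuming the field names sender, recipient and subject from the table above: group by sender, count the messages and the distinct subjects, and keep senders whose every message has its own subject:

| stats count as emails dc(subject) as unique_subjects values(subject) as subjects values(recipient) as recipients by sender
| where emails > 50 AND unique_subjects = emails

The values() calls produce the multivalue subject and recipient columns for the final table; relaxing the second condition (for example, unique_subjects > emails*0.9) would also catch campaigns that reuse a few subjects.
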
Hello,

My Splunk is no longer ingesting emails from our O365 email account. I was not the person who set this up and need assistance in troubleshooting. Can anyone provide assistance/guidance?

There is also an error showing in regards to the KV Store, "KV Store process terminated abnormally (exit code 14, status exited with code 14).", which I'm not sure is related or not. We have a search head cluster set up with 2 indexers that are not clustered.

Hi Everyone,

I ran into an issue today in SIT where TIV0 was inaccessible because a similar directory was full. I'm trying to set one alert for DEV and one for SIT, and the folder path for each environment is:

DEV: /mms/ora1200/u00/oracle
SIT: /mms/ora1201/u00/oracle

This is what I have so far:

index=A "/mms/ora1200/u00/oracle" source=B
| stats latest(storage_used*) as storage_used* latest(storage_free*) as storage_free* by host mount
| where storage_used_percent>90
| eval storage_used=if(storage_used>1000,(storage_used/1000)." GB",storage_used." MB"),
       storage_free=if(storage_free>1000,(storage_free/1000)." GB",storage_free." MB")

Any feedback will be appreciated.

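A sketch of covering both environments with a single alert, assuming the index, source, and field names from the search above; the mapping of mount path to environment label is an assumption:

index=A source=B ("/mms/ora1200/u00/oracle" OR "/mms/ora1201/u00/oracle")
| eval environment=case(like(mount,"%ora1200%"),"DEV", like(mount,"%ora1201%"),"SIT")
| stats latest(storage_used*) as storage_used* latest(storage_free*) as storage_free* by host mount environment
| where storage_used_percent>90

The environment field then shows up in the alert results, so one saved search can fire on either path while still telling DEV and SIT apart.
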
I have two streams of data coming into a HEC. One has call direction (i.e. inbound) and the other has call disposition (i.e. allowed). At first I was joining these streams (join), but found a great thread in the community suggesting using stats instead, so with some cleanup I have something like this:

index="my_hec_data" resource="somedata*"
| stats values(*) as * by id

which works great, and may not even be related to my actual question. Next I want to count by day; cool, so just timechart it. But I suppose my real question is: is that the most efficient way to count calls by day? Or should I do some higher-level aggregation somehow? I don't even know if that makes sense, but if there are 2M calls a day and I go back 30 days, is "counting 60M rows" the best way to display events per day?

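If the goal is just calls per day, a distinct count of the call id per day avoids stitching the two streams together at all; a sketch, assuming the index, resource filter and id field from the search above and that _time reflects the call time:

index="my_hec_data" resource="somedata*"
| bin _time span=1d
| stats dc(id) as calls by _time

For a 30-day panel over tens of millions of events, scheduling a search like this into a summary index (or an accelerated data model) and charting the summarized results is usually cheaper than recounting the raw rows on every load.
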
Hello everyone,

As written in the title, I started using Splunk recently. I would like to know if someone could help me. I have created a dashboard to analyze Windows events, with a query like this:

index=windows sourcetype IN (...) EventCode=*
| stats count by EventCode

Using this search, I get a table with the EventCode in one column and, in the other column, the count of how many times that specific EventCode has appeared. So far everything is fine. How can I retrieve the number of all Windows hosts? I can't figure it out; I've tried a lot of ways but nothing works.

Thanks for the help

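A distinct count of the host field gives the number of Windows hosts; a sketch, keeping the sourcetype list elided exactly as in the post:

index=windows sourcetype IN (...)
| stats dc(host) as windows_host_count

If the number should appear alongside the per-EventCode counts in the same table, adding | eventstats dc(host) as total_hosts before the stats and including total_hosts in its by clause keeps both in one search.
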
Hi folks,

I'm trying to get all saved searches from my SHC and my ES SH by running the following SPL, but I'm unable to see the ones from my ES SH (the SPL is being run on the SHC):

| rest /servicesNS/-/-/saved/searches

When running the SPL, the following message appears:

Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability.

Then I tried running the following SPL and the message disappeared; however, I'm still not able to see the saved searches from my ES SH:

| rest splunk_server=local /servicesNS/-/-/saved/searches

Any idea about this? Is it because of the missing capability? Am I restricted from making this search?

Thanks in advance.

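For context, | rest only reaches instances the searching node can dispatch to (itself, its SHC peers, and its search peers), so a separate ES search head is normally out of reach unless it has been added as a distributed-search peer. Under that assumption, a sketch targeting it by name (the server name is a placeholder):

| rest /servicesNS/-/-/saved/searches splunk_server=<es_search_head_name>

splunk_server=local, by contrast, deliberately limits the call to the instance running the search, which is why the ES results never appear with it.
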
I see a lot of developers using Splunk, but many times log files simply keep growing without limit due to debug being enabled, or due to a chronic failure in the environment that takes a long time to fix. It is important that Splunk provide admins the ability to put caps on ingestion for certain data types.

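As far as I know there is no out-of-the-box per-sourcetype ingestion quota, but admins can already drop noisy data at parse time with props/transforms; a sketch, where the sourcetype name app:debug and the DEBUG pattern are hypothetical examples:

# props.conf
[app:debug]
TRANSFORMS-drop_debug = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = \bDEBUG\b
DEST_KEY = queue
FORMAT = nullQueue

Events matching the regex are routed to the nullQueue before indexing and never count against the license, which at least blunts the runaway-debug-logging case described above.
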
I'm having issues with eventtypes not correctly being applied to VMware Carbon Black Cloud ingest that I can't figure out, as each search in the chain successfully finds events. These are the three eventtypes that chain together. The first two apply correctly (vmware_cbc_base_index, vmware_cbc_alerts), but not the third (vmware_cbc_malware). From eventtypes.conf:

[vmware_cbc_base_index]
search = index=carbonblack_audit

[vmware_cbc_alerts]
search = eventtype=vmware_cbc_base_index sourcetype="vmware:cbc:s3:alerts" OR sourcetype="vmware:cbc:alerts"

[vmware_cbc_malware]
search = eventtype=vmware_cbc_alerts threat_cause_threat_category="*MALWARE*" NOT threat_cause_threat_category="*NON_MALWARE*"

When I use the search from the third eventtype (vmware_cbc_malware), I do get events. Search:

eventtype=vmware_cbc_alerts threat_cause_threat_category="*MALWARE*" NOT threat_cause_threat_category="*NON_MALWARE*"
| stats count by eventtype

eventtype                count
vmware_cbc_alerts        65
vmware_cbc_base_index    65

Can anyone help me figure out why this third eventtype is not being applied?

I need to show a tooltip on a panel, to let users know that clicking on the value will take them to a drilldown. Is there a way to achieve this without using JavaScript? This is the code for the panel from the source:

<panel>
  <title>Supported Platforms Count</title>
  <single>
    <title>This metric gives the count of platforms supported by Integration platform engineering team</title>
    <search>
      <query>| inputlookup Integrations_Platform_List.csv | stats count</query>
      <earliest>$global_time.earliest$</earliest>
      <latest>$global_time.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="drilldown">all</option>
    <option name="height">200</option>
    <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
    <option name="refresh.display">progressbar</option>
    <option name="trellis.enabled">0</option>
    <option name="trellis.size">medium</option>
    <option name="trellis.splitBy">_aggregation</option>
    <drilldown>
      <link target="_blank">search?q=%7C%20inputlookup%20Integrations_Platform_List.csv%0A%7C%20stats%20count&amp;earliest=$global_time.earliest$&amp;latest=$global_time.latest$</link>
    </drilldown>
  </single>
</panel>

Thanks,

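One JavaScript-free option (a sketch, not the only approach) is to add a small <html> element inside the same panel; the title attribute gives the browser's native hover tooltip, and the hint text is of course an assumption:

<panel>
  <title>Supported Platforms Count</title>
  <html>
    <p title="Click the value to open the underlying search in a new tab.">Hover here for click-through details</p>
  </html>
  <single>
    ...
  </single>
</panel>

The single-value visualization itself stays untouched; the hint simply sits above it in the panel.
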
I have a Splunk query in which my intention is to get all IP addresses for which "Event A" occurred in the last 22 hours, starting from 4 hours ago, but "Event B" did not occur in the last 24 hours for the same IP address. It is known that "Event A" will have at most one occurrence per IP address, but "Event B" will have multiple occurrences. Following is the query:

index=prod-* sourcetype="kube:service" "Event A" earliest=-24h latest=-4h
| table IpAddress
| search NOT
    [search index=prod-* sourcetype="kube:service" AND ("Event B") earliest=-24h latest=-0h
     | table IpAddress ]

Why is the first query not working fine? It does not work and still returns results even when there is an IP address with "Event A" and multiple "Event B" events for the same IP address. But if I add dedup IpAddress to the inner NOT search, then it works fine. Updated query:

index=prod-* sourcetype="kube:service" "Event A" earliest=-24h latest=-4h
| table IpAddress
| search NOT
    [search index=prod-* sourcetype="kube:service" AND ("Event B") earliest=-24h latest=-0h
     | dedup IpAddress
     | table IpAddress ]

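A likely explanation is the subsearch result limit: by default a subsearch returns at most 10,000 rows, so without the dedup the many "Event B" rows get truncated and some IP addresses never make it into the NOT list; dedup shrinks the list below the limit. A sketch of an alternative that avoids the subsearch entirely, assuming the index, sourcetype and IpAddress field from the query above:

index=prod-* sourcetype="kube:service" ("Event A" OR "Event B") earliest=-24h
| eval eventA=if(searchmatch("\"Event A\"") AND _time<=relative_time(now(),"-4h"), 1, 0)
| eval eventB=if(searchmatch("\"Event B\""), 1, 0)
| stats max(eventA) as hasA max(eventB) as hasB by IpAddress
| where hasA=1 AND hasB=0

The stats approach scales past the 10,000-row subsearch cap, and the _time check reproduces the latest=-4h restriction for "Event A".
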