All Topics

Hello! I am making a timechart of how many apples have been picked EACH day. However, the data field representing the number of picked apples is a cumulative sum over the month. For example: yesterday 5 apples were picked, today 3; instead of today's pick count = 3, it is represented as 8 (5+3). Given this, how can I make the timechart subtract the number of apples previously picked from the current number of apples picked, so I get the number of apples picked that day?

Code:

Index......
| bin span=1d _time
| dedup _time Apple_type
| stats sum(pick_count) as Picked by _time Apple_type
| timechart values(Picked) by Apple_type span=1d
| fillnull value=0

Results: [screenshot] But I want: [screenshot] Please help! Thank you.
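One way to turn a running total back into daily values is streamstats; a minimal sketch, assuming the index and field names from the question (max picks the day's latest cumulative value, and the subtraction against the previous day yields the daily count):

Index......
| bin span=1d _time
| dedup _time Apple_type
| stats max(pick_count) as Cumulative by _time Apple_type
| streamstats current=f last(Cumulative) as Previous by Apple_type
| eval Picked = Cumulative - coalesce(Previous, 0)
| timechart span=1d values(Picked) by Apple_type
| fillnull value=0

Because stats emits rows in ascending _time order, streamstats sees each Apple_type's previous total on the prior row.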
Hey community, can someone help me out with a rex-related question? Many, many thanks! I am trying to rex the V1 out of a sample string, and I have tried:

catalogVersion\\":\\"(?P<catalogVersion>[^ ]+)\\",

In regex101 it works; however, in Splunk I am getting an "Unbalanced quotes" error.

Sample string:

\"transferDisconnectReasons\":null,\"catalogVersion\":\"V1\",\"accountCustomerDetails\"

Cheers!
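The "Unbalanced quotes" error usually means an unescaped " inside the double-quoted rex pattern is terminating the string early: within rex, a literal quote must be written as \" and a literal backslash as \\\\. A sketch under that reading, with the capture class stopping at the next backslash or quote so only V1 is captured:

| rex "catalogVersion\\\\\":\\\\\"(?<catalogVersion>[^\\\\\"]+)"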
Hello y'all! I'm trying to use the Single Value object and build a search which counts the number of records and shows it, but for some reason it's not bringing back the right number. Here is my search:

index=redhatinsights
| spath
| spath path=events{} output=events
| stats by _time, events, application, event_type, account_id, context.display_name
| mvexpand events
| eval _raw=events
| kv
| table _time
| where relative_time(now(), "-30d") <= _time
| timechart span=30d count(_time) as count
| appendpipe
    [| stats count
     | where count=0
     | addinfo
     | eval time=info_min_time." ".info_max_time
     | makemv time
     | mvexpand time
     | table time count
     | rename time as _time ]

For some reason it is not bringing back all the records, and the time range picker has no effect on the result. What is the right way to use this object to get the total count of records over the last 30 days? Thanks!
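For a Single Value, the search can usually be reduced to a single row, with the window set on the search itself rather than re-filtered on _time afterwards; a sketch keeping the extraction steps from the question (earliest=-30d stands in for a shared dashboard time picker):

index=redhatinsights earliest=-30d
| spath path=events{} output=events
| mvexpand events
| stats count

A Single Value renders the first cell of the first result row, so ending on a plain stats count sidesteps the timechart/appendpipe bucketing entirely.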
Hello, I have lots of records: some have the account_id field filled, others have the org_id field filled, and some have both filled. I'm trying to get both fields (account_id and org_id) into the table, but when I put org_id into the stats by clause, only a few records come back; if I remove it, all the records come back. What am I doing wrong? Thanks!

Here is my search:

| spath
| rename object.* as *
| spath path=events{} output=events
| mvexpand events
| stats by timestamp, events, application, event_type, org_id, account_id, context.display_name
| eval _raw=events
| kv
| table created_at_fmt, account_id, "application", "event_type", "context.display_name", title, url, org_id
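stats drops any event whose by-fields contain a null, so rows missing org_id (or account_id) disappear as soon as that field joins the by clause; a common workaround is to fill the nulls first. A sketch with a placeholder value:

...
| mvexpand events
| fillnull value="n/a" org_id account_id
| stats count by timestamp, events, application, event_type, org_id, account_id, context.display_name
...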
We recently upgraded Splunk Enterprise from 8.1.3 to 9.0.1. The UFs are still on 8.1.3. In the front-end health check, we are getting the below error for Forwarder Ingestion Latency on the SH and CM as well as the indexers.

Root Cause(s):
Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1581. Message from <some_value>
Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1301539. Message from <some_value>
Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1301539. Message from <some_value>
Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 1311. Message from <some_value>

Unhealthy Instances:
- instance name1
- instance name 2
and so on
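If the actual ingestion latency checks out and the new 9.x indicator is just too noisy, the thresholds can be raised or the alert suppressed; a sketch, assuming the standard health.conf indicator syntax (threshold values here are illustrative, not recommendations):

# $SPLUNK_HOME/etc/system/local/health.conf
[feature:ingestion_latency]
# raise the yellow/red thresholds for the gap multiplier
indicator:ingestion_latency_gap_multiplier:yellow = 40
indicator:ingestion_latency_gap_multiplier:red = 120
# or stop alerting on this indicator entirely
alert:ingestion_latency_gap_multiplier.disabled = 1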
Hi everyone, I suspect that the following order of events caused an alert not to trigger when due:

1) I cloned the original alert for testing purposes
2) The two alerts found the same results and ran simultaneously
3) I disabled the cloned alert
4) The original alert stopped triggering (no email being sent, no events being logged to our alert index...) even when its search condition was fulfilled. I repeated the search with the alert's logic and results come back.

I have no other explanation than the above. Has anyone seen this happen before? Thank you in advance
Hi, Splunk is adding additional double quotes when I export data as CSV. When I use the exported file as an Eventgen sample file, the extra quotes cause parsing issues when inserting events from the sample file. Any suggestions for this issue?
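CSV export quotes fields per the CSV format, so Eventgen replays those quotes literally unless told the sample is CSV; a sketch of the relevant eventgen.conf stanza (the filename is a placeholder; sampletype = csv expects the index/host/source/sourcetype/_raw columns that a raw-event export produces):

# eventgen.conf
[my_export.csv]
mode = sample
sampletype = csv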
Hi, I'm not quite sure how to install this app on Splunk Cloud; I'd appreciate any help! https://splunkbase.splunk.com/app/2962#/overview
Hi, we have an add-on that takes JSON-formatted data input, and I can export the data in JSON format. Could you please let me know how to generate events using Eventgen with the exported JSON sample file?
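Eventgen treats each line of a sample file as one raw event, so a line-delimited JSON export can be replayed as-is; a sketch of an eventgen.conf stanza (the stanza name and the timestamp token are assumptions about the data):

# eventgen.conf
[my_export.json]
mode = sample
sampletype = raw
# rewrite embedded timestamps so replayed events look current
token.0.token = \d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}
token.0.replacementType = timestamp
token.0.replacement = %Y-%m-%dT%H:%M:%S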
I suspect that my outputs.conf configuration files are causing some unwanted data cloning in my forwarders. I am trying to make sense of some weird behavior I am observing; I am hoping someone can fact-check my assumptions for validity, or tell me if I am not understanding this issue correctly.

I have a UF on a syslog server. On the UF is a variety of apps, only a few of which possess an outputs.conf file. If I search for outputs.conf files, these are the 4 that I find:

./apps/SplunkUniversalForwarder/default/outputs.conf
./apps/comp_all_forwarder_outputs/local/outputs.conf
./apps/comp_all_outputs/local/outputs.conf
./system/default/outputs.conf

Based on the conf file precedence rules, I would expect the two under ./local/ to take priority over the two under ./default/. Looking at each file, one specifies the indexer peers by FQDN, and the other specifies the peers by IP address. Since both files have the same priority, and they are not the same conf file, would this create a scenario where Splunk sends data to the indexer tier twice (once for each outputs.conf file), cloning the data into the same indexing tier?

/opt/splunkforwarder/etc/apps/comp_all_outputs/local/outputs.conf

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = spkidx01.comp.com:9997, spkidx02.comp.com:9997, spkidx03.comp.com:9997
autoLB = true

/opt/splunkforwarder/etc/apps/comp_all_forwarder_outputs/local/outputs.conf

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.15.4.229:9997, 10.15.5.85:9997, 10.15.4.250:9997

The IP addresses listed resolve to the FQDNs in the previous outputs.conf file. I would expect Splunk, or maybe the OS, to treat these as two separate outputs.

TIA!
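For what it's worth, two stanzas with the same name ([tcpout:primary_indexers]) should merge under the precedence rules into a single target group rather than producing two; cloning generally requires defaultGroup to list multiple distinct groups. A quick way to see the effective merged configuration on the UF:

$SPLUNK_HOME/bin/splunk btool outputs list --debug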
Hi All, currently we have a table like the one below. Target values are fixed for each row, but the columns are added dynamically (they can be any month of the calendar year, e.g. June, July, August); they actually come from a month field, and after stats we used the chart command to show the month names as columns.

target    June    July
100       100     96
98        96      100
97        92      93
96        90      91

Each cell value needs to be compared with the corresponding target value in its row (e.g. 100 in June is compared with target 100, 96 in June with target 98, and so on), based on the following conditions:

If June >= target -> show June in green
If June - target < 5% -> show June in blue
If June - target > 5% -> show June in red

Expected output: [screenshot]
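Since the month columns are not known in advance, one approach is foreach, tagging every non-target cell with a color bucket that a table renderer can read; a sketch, taking the conditions above as "shortfall below target, as a percentage of target":

| foreach *
    [ eval "<<FIELD>>" = case(
        "<<FIELD>>" == "target", '<<FIELD>>',
        '<<FIELD>>' >= target, '<<FIELD>>' . "|green",
        target - '<<FIELD>>' < target * 0.05, '<<FIELD>>' . "|blue",
        true(), '<<FIELD>>' . "|red") ]

The |green/|blue/|red suffix would then be stripped and mapped to a cell color by a custom table renderer (e.g. a Simple XML JS extension), since the built-in color ranges cannot compare one column against another.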
Hi Splunkers, I have some doubts about the forwarder buffer, both universal and heavy. The starting point is this: I know that if an indexer receiving data from a UF goes down, the UF has a buffering mechanism to store the data and send it to the proper destination once the indexer is up and running again. If I'm not wrong, the limits of this buffer can be set in a config file (I don't remember which one). Now, the questions are:

1. Even if the answer may be obvious: is this mechanism also available on a HF?
2. How do I decide the maximum size of my buffer? Is there a preset limit, or does it depend on my environment?
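For reference, the in-memory output queue is sized in outputs.conf, and individual inputs can additionally spill to a disk-backed persistent queue via inputs.conf; a sketch with illustrative values (stanza names are placeholders):

# outputs.conf: in-memory queue drained toward the indexers (applies to UFs and HFs alike)
[tcpout:primary_indexers]
server = idx1:9997,idx2:9997
maxQueueSize = 512MB

# inputs.conf: example network input with a persistent queue on disk
[tcp://9999]
queueSize = 10MB
persistentQueueSize = 5GB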
Hello Splunkers, everything is in the title. I've read the limits.conf documentation:

[thruput]
maxKBps = <integer>

I know that a UF has a default value of 256 KBps, but does a Heavy Forwarder also have this kind of limitation? Regards, VERDIN-POL Gaétan
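If memory serves, a full Splunk Enterprise instance (which is what a HF is) ships with maxKBps = 0, i.e. unthrottled, so the 256 KBps cap is specific to the UF's default limits.conf; either default can be overridden locally. A sketch:

# $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
# 0 disables the cap; any positive integer caps throughput at that many KB/s
maxKBps = 0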
Hi folks, I need your support to build a search query to track a migration activity. We have a requirement to track hosts being migrated from Windows to Linux, and the search should visualize the movement of the migration over time (the last 7 days). I have two lookup files: one with the Windows host details, and another with the Linux hosts. So I need to compare how many machines migrated from Windows to Linux over that time.

| inputlookup windows.csv
| fillnull value="windows" OS
| inputlookup linux.csv append=1
| fillnull value="linux" OS
| stats dc(OS) as count values(lastSeen) as LastSeen, values(FirstSeen) as Firstseen by hostname
| where count > 1
| mvexpand OS

The above query doesn't show the expected result. I would really appreciate it if someone has any ideas or suggestions on this.
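One likely wrinkle above: OS does not survive the stats (there is no values(OS) aggregate), so the final mvexpand OS has nothing to expand. A sketch of an alternative, assuming both files share a hostname column:

| inputlookup windows.csv
| eval OS="windows"
| inputlookup append=true linux.csv
| eval OS=coalesce(OS, "linux")
| stats values(OS) as OS dc(OS) as os_count values(lastSeen) as LastSeen values(FirstSeen) as FirstSeen by hostname
| where os_count > 1
| stats count as migrated_hosts

Hosts appearing in both files carry both OS values, so os_count > 1 approximates "migrated"; restricting to the last 7 days would additionally need the FirstSeen/lastSeen values compared against relative_time(now(), "-7d").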
How do I explicitly tell Splunk that an app is configured, with Python?

* I have a custom setup page (dashboard) in my app.
* I'm using a Python REST endpoint to configure the app, and then executing the below REST endpoint (code) to tell Splunk that the app has been successfully configured.
* This concept works fine on regular SHs.
* We have hit an issue on a Search Head Cluster (Splunk version 9.0.1).
* Our app.conf with the parameter is_configured = 1 is replicated across all the SHs.
* The problem is that even though the conf file is replicated perfectly fine, the other SHs still redirect to the setup page.
* Below are some other endpoints that I tried as alternatives, with no luck:

/servicesNS/nobody/<app>/properties/app/install (pass is_configured as a parameter)
/services/apps/local/<app>?output_mode=json

* All of the endpoints I tried returned success responses, but the issue persisted.

Does anyone know the right way to do it? Is this a bug in 9.0.x?
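For comparison, the endpoint I would expect to flip the flag is the apps/local one (a sketch, not verified on a SHC; <app> is a placeholder), since it sets configured through the apps framework rather than by writing app.conf directly:

POST /servicesNS/nobody/<app>/apps/local/<app>
    configured=true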
Hi, every time I apply the shcluster bundle, the deployer pushes all apps in /opt/splunk/etc/shcluster/apps to the SHC members, even if there hasn't been any modification to the app. I can see that the checksum is the same on the deployer and the SHC members, but somehow the deployer still pushes the app. We are using push mode merge_to_default and are on Splunk version 9.0.1. Every apply shcluster-bundle takes several hours. Any ideas?
Hey guys. When I export a dashboard with tables from Dashboard Studio, the export includes scrollbars. Is there a way to remove them, or to code the tables so that they expand with their content and break lines over pages?
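Scrollbars appear when a table has more rows than its panel height allows, so one workaround before exporting is making the panel tall enough in the dashboard's layout JSON; a sketch of an absolute-layout entry (the item name and pixel sizes are placeholders):

"layout": {
    "type": "absolute",
    "structure": [
        {
            "item": "viz_table_1",
            "type": "block",
            "position": { "x": 0, "y": 0, "w": 1200, "h": 1600 }
        }
    ]
}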
Hey everyone. I had a dashboard on the old Dashboards page and I didn't like the way it exported, so I cloned it into Dashboard Studio. While the export looks nice now, it lost the ability to color table cells based on value. Does anyone know how to make this work through the UI or through JSON?
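In Studio JSON, per-column cell coloring hangs off the table's columnFormat option with ranges defined in context; a sketch, where count is a placeholder field name and the thresholds/hex colors are examples:

"options": {
    "columnFormat": {
        "count": {
            "rowBackgroundColors": "> table | seriesByName(\"count\") | rangeValue(countRangeConfig)"
        }
    }
},
"context": {
    "countRangeConfig": [
        { "to": 20, "value": "#D41F1F" },
        { "from": 20, "to": 50, "value": "#F8BE34" },
        { "from": 50, "value": "#118832" }
    ]
}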
I am getting this error when trying to run the command sudo useradd -m splunk. Can anyone help?
Hi Team, I want a Splunk search query for alert creation. My requirement: the service response time is > 3 seconds, and only if that is continuous for more than 10 minutes should an alert be raised. In the search query I used where for the response-time condition, but I can't work out how to write the time condition. Below is my search query; please help me add the time condition to the query itself.

index=kpidata
| eval ProcessingTime=ProcessingTimeMS/1000
| where ProcessingTime > 3
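One way to express "sustained for 10 minutes" is to bucket by minute and require the breach to hold across 10 consecutive buckets; a sketch using the field names from the question, assuming at least one event per minute (min > 3 means every event in that minute was slow):

index=kpidata earliest=-15m
| eval ProcessingTime = ProcessingTimeMS / 1000
| bin _time span=1m
| stats min(ProcessingTime) as min_time by _time
| eval breached = if(min_time > 3, 1, 0)
| streamstats window=10 sum(breached) as breached_minutes
| where breached_minutes = 10

Scheduled every few minutes with "trigger when number of results > 0", this fires only after ten consecutive one-minute buckets stayed above 3 seconds.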