All Topics


Hello, I have some data in the form below:

JOB  EVENT  TYPE  TIME
1    1      A     20
1    1      B     15
1    1      C     10
1    2      A     15
1    2      B     10
1    2      C     20

I want to filter the data down to only the event which has the greater value for type A. In my example, event 1 has A=20 and event 2 has A=15, so event 1's type A value is the greater one, and I want to see the results for event 1 only. My result should be something like below:

JOB-NO  EVENT  TYPE  TIME
1       1      A     20
1       1      B     15
1       1      C     10

Thanks in advance.
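A minimal SPL sketch of one way to do this, assuming the four columns are already extracted as fields and index=your_jobs is a placeholder: the first eventstats finds the highest type-A TIME per JOB, the second marks the EVENT that owns it, and where keeps only that event's rows.

index=your_jobs
| eventstats max(eval(if(TYPE="A", TIME, null()))) as best_a_time by JOB
| eventstats values(eval(if(TYPE="A" AND TIME=best_a_time, EVENT, null()))) as best_event by JOB
| where EVENT=best_event
| table JOB EVENT TYPE TIME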
Hi, a bit of a strange one that I can't work out. I have a deployer and a search head in one DC and two search heads in another DC. They are all part of the same search head cluster and all share the same configs. My problem is that the search head app has been deployed to all the search heads. The two search heads located in the same DC have the app and the correct configs, but don't perform any field extractions. Interestingly, if I open an event and select "Extract Fields", the parser sees the fields. The search head on its own performs as expected. I can see no errors, and running btool confirms the file is also correct. It's the first time I've ever come across this. TIA, Steve
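One hedged way to compare what each member has actually loaded is the REST endpoint for search-time extractions; run this on each search head (the app name is a placeholder) and diff the results:

| rest /services/data/props/extractions splunk_server=local
| search eai:acl.app="your_app"
| table title eai:acl.app eai:acl.sharing

If an extraction shows up on the working member but not on the others, that points at replication or permissions rather than the configs on disk.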
I have the following events in Splunk:

_time                         Agent_Hostname    alarm               status
2020-08-23T03:04:05.000-0700  m50-ups.a_domain  upsAlarmOnBypass    raised
2020-08-23T03:07:16.000-0700  m50-ups.a_domain  upsTrapOnBattery    raised
2020-08-23T03:07:16.000-0700  m50-ups.a_domain  upsAlarmInputBad    raised
2020-08-23T03:07:39.000-0700  m50-ups.a_domain  upsAlarmOnBypass    raised
2020-08-23T03:07:39.000-0700  m50-ups.a_domain  upsAlarmLowBattery  raised
2020-08-23T03:08:17.000-0700  m50-ups.a_domain  upsTrapOnBattery    raised
2020-08-23T03:09:24.000-0700  m50-ups.a_domain  upsTrapOnBattery    raised
2020-08-23T03:10:31.000-0700  m50-ups.a_domain  upsAlarmOnBattery   cleared
2020-08-23T03:10:32.000-0700  m50-ups.a_domain  upsAlarmInputBad    cleared
2020-08-23T03:11:12.000-0700  m50-ups.a_domain  upsAlarmLowBattery  cleared
2020-08-23T03:19:06.000-0700  m50-ups.a_domain  upsAlarmInputBad    raised
2020-08-23T03:19:06.000-0700  m50-ups.a_domain  upsTrapOnBattery    raised
2020-08-23T03:19:13.000-0700  m50-ups.a_domain  upsAlarmLowBattery  raised
2020-08-23T03:20:10.000-0700  m50-ups.a_domain  upsTrapOnBattery    raised
2020-08-23T03:21:16.000-0700  m50-ups.a_domain  upsTrapOnBattery    raised
2020-08-23T03:22:22.000-0700  m50-ups.a_domain  upsTrapOnBattery    raised
2020-08-23T03:23:29.000-0700  m50-ups.a_domain  upsTrapOnBattery    raised
2020-08-23T03:24:28.000-0700  m50-ups.a_domain  upsAlarmInputBad    cleared
2020-08-23T03:24:28.000-0700  m50-ups.a_domain  upsAlarmOnBattery   cleared
2020-08-23T03:25:09.000-0700  m50-ups.a_domain  upsAlarmLowBattery  cleared
2020-08-23T03:25:58.000-0700  m50-ups.a_domain  upsAlarmOnBypass    cleared

My problem is how to compute records of incident duration for each host and each alarm type. For example, from the above events I'd have the following:

start                         end                           Agent_Hostname    alarm
2020-08-23T03:04:05.000-0700  2020-08-23T03:25:58.000-0700  m50-ups.a_domain  upsAlarmOnBypass
2020-08-23T03:07:16.000-0700                                m50-ups.a_domain  upsTrapOnBattery
2020-08-23T03:07:16.000-0700  2020-08-23T03:24:28.000-0700  m50-ups.a_domain  upsAlarmInputBad
2020-08-23T03:07:39.000-0700  2020-08-23T03:25:09.000-0700  m50-ups.a_domain  upsAlarmLowBattery

where start is the earliest time an alarm for a host is first raised, and end is the time when the same alarm/host is cleared. My second problem is how to find the biggest duration among those enclosed spans, ignoring those without an end time. How can I achieve this within Splunk?
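A hedged sketch (index and sourcetype are placeholders): transaction groups each raised-to-cleared pair per host and alarm, keepevicted=true retains alarms that never clear, and closed_txn distinguishes the two cases.

index=your_index sourcetype=your_ups_traps (status=raised OR status=cleared)
| transaction Agent_Hostname alarm startswith="status=raised" endswith="status=cleared" keepevicted=true
| eval start=strftime(_time, "%Y-%m-%dT%H:%M:%S.%3N%z")
| eval end=if(closed_txn=1, strftime(_time + duration, "%Y-%m-%dT%H:%M:%S.%3N%z"), null())
| table start end Agent_Hostname alarm duration

For the second problem, appending | where closed_txn=1 | sort - duration | head 1 would keep only the longest completed span.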
Hello, I'm setting up a new Splunk instance that is supposed to replace an old one. For the sake of validating that all is working correctly, I used the "Forwarding and receiving" option to send the data from splunk 1 to splunk 2, and it is working correctly. Now, since I need in splunk 2 only some of the data that is sent to splunk 1, I want to filter it. I've tried to use the nullQueue/indexQueue technique in indexer 2's props/transforms (configurations like this I've already used hundreds of times with heavy forwarders), but it is not working in this case. Appreciate your help!
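For reference, the usual parsing-phase filter looks like the sketch below (sourcetype and regex are placeholders). One likely catch in this topology: events forwarded from a full Splunk instance are already parsed ("cooked") and skip the parsing pipeline on the receiving indexer, so these rules never fire there; the filtering generally has to happen on the instance that first parses the data, i.e. splunk 1.

# props.conf
[your_sourcetype]
TRANSFORMS-filter = drop_everything, keep_wanted

# transforms.conf
[drop_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_wanted]
REGEX = pattern_you_want_to_keep
DEST_KEY = queue
FORMAT = indexQueue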
I updated my Palo Alto Networks Add-on to version 6.3.1 and now I'm seeing the errors below in splunkd.log on the search head cluster members the add-on is deployed to.       09-03-2020 09:54:10.323 -0500 ERROR AdminManagerExternal - Stack trace from python handler:\nTraceback (most recent call last):\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 19, in handle\n return func(*args, **kwargs)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 71, in _migrate\n self._migrate_conf_credential()\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 161, in _migrate_conf_credential\n conf_file, stanzas = self._load_conf(conf_file_name)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 178, in _load_conf\n stanzas = conf_file.get_all()\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/utils.py", line 159, in wrapper\n return func(*args, **kwargs)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/conf_manager.py", line 241, in get_all\n key_values = self._decrypt_stanza(name, stanza_mgr.content)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/conf_manager.py", line 126, in _decrypt_stanza\n self._cred_mgr.get_password(stanza_name))\n File "/opt/splunk/lib/python3.7/json/__init__.py", line 348, in loads\n return _default_decoder.decode(s)\n File "/opt/splunk/lib/python3.7/json/decoder.py", line 337, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File "/opt/splunk/lib/python3.7/json/decoder.py", line 355, in raw_decode\n raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 148, in init\n hand.execute(info)\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 634, in execute\n if self.requestedAction == ACTION_LIST: self.handleList(confInfo)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 36, in handleList\n self._migrate()\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 23, in handle\n 'Migrating failed. %s' % traceback.format_exc()\nsplunktaucclib.rest_handler.error.RestError: REST Error [500]: Internal Server Error -- Migrating failed. 
Traceback (most recent call last):\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 19, in handle\n return func(*args, **kwargs)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 71, in _migrate\n self._migrate_conf_credential()\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 161, in _migrate_conf_credential\n conf_file, stanzas = self._load_conf(conf_file_name)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 178, in _load_conf\n stanzas = conf_file.get_all()\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/utils.py", line 159, in wrapper\n return func(*args, **kwargs)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/conf_manager.py", line 241, in get_all\n key_values = self._decrypt_stanza(name, stanza_mgr.content)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/conf_manager.py", line 126, in _decrypt_stanza\n self._cred_mgr.get_password(stanza_name))\n File "/opt/splunk/lib/python3.7/json/__init__.py", line 348, in loads\n return _default_decoder.decode(s)\n File "/opt/splunk/lib/python3.7/json/decoder.py", line 337, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File "/opt/splunk/lib/python3.7/json/decoder.py", line 355, in raw_decode\n raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n\n 09-03-2020 09:54:10.323 -0500 ERROR AdminManagerExternal - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [500]: Internal Server Error -- Migrating failed. 
Traceback (most recent call last):\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 19, in handle\n return func(*args, **kwargs)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 71, in _migrate\n self._migrate_conf_credential()\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 161, in _migrate_conf_credential\n conf_file, stanzas = self._load_conf(conf_file_name)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/splunk_aoblib/rest_migration.py", line 178, in _load_conf\n stanzas = conf_file.get_all()\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/utils.py", line 159, in wrapper\n return func(*args, **kwargs)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/conf_manager.py", line 241, in get_all\n key_values = self._decrypt_stanza(name, stanza_mgr.content)\n File "/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/conf_manager.py", line 126, in _decrypt_stanza\n self._cred_mgr.get_password(stanza_name))\n File "/opt/splunk/lib/python3.7/json/__init__.py", line 348, in loads\n return _default_decoder.decode(s)\n File "/opt/splunk/lib/python3.7/json/decoder.py", line 337, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File "/opt/splunk/lib/python3.7/json/decoder.py", line 355, in raw_decode\n raise JSONDecodeError("Expecting value", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n". See splunkd.log for more details.     I took a look at rest_migration.py, and it looks to me like it's looking for credentials from an older version of the TA that wasn't installed on my search heads (I'm not great with Python, so I could be wrong). The add-on is deployed to a 4-member search head cluster with my deployer. Anyone have any ideas on how to resolve this? As it is, when I try to configure accounts or add-on settings in the app, I just get a spinning wheel that says "Loading".
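If it helps the debugging: the traceback dies while JSON-decoding a stored password, so one hedged way to see which credentials the migration code is trying to decrypt is the storage/passwords endpoint, scoped to the add-on:

| rest /servicesNS/-/Splunk_TA_paloalto/storage/passwords splunk_server=local
| table title realm username

A stanza left behind by an older version with a malformed payload would be a candidate to clear out of passwords.conf in the add-on's local directory on each member; that is an assumption based on the traceback, not a confirmed fix.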
I am trying to schedule a report that will give me the list of tickets created in a day. When I put the filter for yesterday in the date picker, it gives me all the tickets that had an event generated yesterday from ServiceNow. I only need the ones that were opened on a particular day, not the ones that were updated. For example, my table (created and _time) has the rows below and I want only the first one:

created              _time
2020-09-02 07:44:55  2020-09-02 14:52:24
2020-08-28 15:27:36  2020-09-02 02:27:00
2020-08-21 16:05:46  2020-09-02 02:25:59
2020-08-24 13:46:39  2020-09-02 00:53:54

Is there a way I can filter this table to get only those rows where the created column has the same date as _time? This might be the simplest query of all, but I am a newbie here. I'd really appreciate it if someone could help me.
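A minimal sketch, assuming created is a string in the format shown (the base search is a placeholder): convert both sides to a plain date and compare.

index=your_index sourcetype=snow:incident
| eval created_day=strftime(strptime(created, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d")
| where created_day=strftime(_time, "%Y-%m-%d")
| table created _time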
Hi, I use AppDynamics Lite. When I run the Dexter exe, I get the following error: 401 (Unauthorized).

{
    "Controller": "https://saastenant.saas.appdynamics.com ",
    "UserName": "saastenant@saastenant",
    "UserPassword": "",
    "Application": ".*",
    "NameRegex": true,
    "Type": "APM"
},
Hello all, I am using the table row expansion.js from the dashboard examples, but I can only do a one-level row expansion and I actually want a two-level row expansion. I am not good with JavaScript, so I just wanted help getting the two-level nesting working.
Student_name  Status  marks
john          fail    30
han           fail    10
ram           fail    20
vish          Pass    50
han           Pass    90
ram           Pass    50

The output should be as below, since ram passed on the second attempt:

Student_name  Status  marks
john          fail    30
han           fail    10
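A hedged sketch for keeping only students with no Pass attempt (the base search is a placeholder). Note that in the sample data han also has a Pass row, so this would drop han as well; if that row is a typo, the sketch matches the expected output.

index=your_index
| eventstats values(Status) as all_status by Student_name
| where Status="fail" AND isnull(mvfind(all_status, "Pass"))
| table Student_name Status marks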
Hello, I have a dashboard which contains three inputs and a single panel with one search.

input1: a dropdown with the following names and values:
a: 1
b: 2

input2: a dropdown with the following names and values:
Jan: earliest="01/01/2020 00:00:00" latest="01/01/2020 24:00:00"
Feb: earliest="01/02/2020 00:00:00" latest="01/02/2020 24:00:00"

input3: a time input.

When I choose input1 = 1, input2 should be shown and input3 hidden, and whatever value I choose from input2 (month) needs to be passed to the search. When I choose input1 = 2, input3 should be shown and input2 hidden, and whatever value I choose from input3 (time) needs to be passed to the search.
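A Simple XML sketch of the show/hide part, using <change>/<condition> on input1 and depends on the other two (token names are placeholders):

<input type="dropdown" token="selector">
  <label>input1</label>
  <choice value="1">a</choice>
  <choice value="2">b</choice>
  <change>
    <condition value="1">
      <set token="show_month">true</set>
      <unset token="show_time"></unset>
    </condition>
    <condition value="2">
      <set token="show_time">true</set>
      <unset token="show_month"></unset>
    </condition>
  </change>
</input>
<input type="dropdown" token="month_range" depends="$show_month$"> ... </input>
<input type="time" token="time_range" depends="$show_time$"> ... </input>

For feeding one search from either input, a common pattern is to have each visible input's own <change> handler set a shared token (e.g. <set token="range">$value$</set>) that the panel's search references.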
I need to prevent a dashboard panel from splitting across two tabs. Is there a limit to the number of panels, or to the size of dashboards, that could be causing this?
I have an app which includes a custom command that has to cache some information on the indexer it runs on. What is the best location to store this file? Is $SPLUNK_HOME/etc/apps/<appname>/lookups/<my_cache_file_name>.cache sensible? What will the indexer cluster do with these files? Just in case you wonder: the caches are too large to distribute with the replication bundle as a lookup table. They are filled by reading the KV store from the calling search head with REST calls.
While upgrading my Splunk installation from version 7 to 8, it seems the default admin user name and default password no longer exist. Below is the code I use:

- name: check if password change is required
  command: /opt/splunk/bin/splunk login -auth admin:changeme
  register: login_result
  ignore_errors: true

- name: change instance password
  command: /opt/splunk/bin/splunk edit user admin -role admin -auth admin:changeme -password {{ admin_password }}
  when: login_result is succeeded

The code above gives an error while running:

"msg": "non-zero return code",
"rc": 24,
"start": "2020-09-03 13:06:30.914377",
"stderr": "No users exist. Please set up a user.",
"stderr_lines": ["No users exist. Please set up a user."]

Has anyone faced the same issue?
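For what it's worth, since Splunk 7.1 a fresh install ships with no admin user at all, and the documented way to create one non-interactively is a user-seed.conf dropped in before the first start, roughly like this (the password value is a placeholder you could template from {{ admin_password }}):

# $SPLUNK_HOME/etc/system/local/user-seed.conf
# Read once on first startup, before any user exists.
[user_info]
USERNAME = admin
PASSWORD = your_strong_password_here

An Ansible task could template this file onto the host before the first splunk start; after that, the login and edit user tasks can authenticate as admin with the seeded password.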
Hi, I need some help with the join command. I have 2 events as below.

1st event:
2020-09-03 12:50:01,811|catalina-exec-173|INFO|LoggingService|RESPONSE status=404 api=[/checkout/details]" error="detail_not_found"

2nd event:
2020-09-03 12:54:50,915|catalina-exec-137|INFO|OrderService|Order placed successfully

Both events have a common field, uniqueId. What I want is to extract the uniqueIds for the scenario where the 1st event occurred but the 2nd event didn't, meaning the number of users who got the error and didn't proceed to place the order. I have the query below, with which I was able to join the 2 events, but what I want is the opposite. Can someone advise?

index=my_test host="server-1" "OrderService|Order placed successfully" | join uniqueId max=0 [ search index=my_test host="server-1" status=[404] "api=[/checkout/details]" error="detail_not_found" | dedup uniqueId]
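One hedged way to get the anti-join without join at all: pull both event types in one search, flag them, and keep the IDs that only ever saw the error (index and host as in your query; the flag names are made up):

index=my_test host="server-1" ("detail_not_found" OR "Order placed successfully")
| eval failed=if(searchmatch("detail_not_found"), 1, 0)
| eval placed=if(searchmatch("Order placed successfully"), 1, 0)
| stats max(failed) as failed, max(placed) as placed by uniqueId
| where failed=1 AND placed=0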
Hello! I'm new to Splunk, and I would like to change the management port on only a single host from 8089 to 9089 due to a port conflict. I have read that it can be done from Settings > Server settings > General settings, but from my understanding that way I would have to change the port on every host, which is not what I want; I want to change it only on the single host that has the port conflict. If this is possible, detailed steps would be appreciated. Thanks.
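Two hedged ways to do this on just the affected host (both are the usual documented mechanisms; adjust paths to your install). Via the CLI:

$SPLUNK_HOME/bin/splunk set splunkd-port 9089
$SPLUNK_HOME/bin/splunk restart

or the equivalent setting in $SPLUNK_HOME/etc/system/local/web.conf on that host:

[settings]
mgmtHostPort = 127.0.0.1:9089

If other instances (deployment server, license master, search heads) reach this host on 8089, those references would need updating to 9089 as well.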
In summary, we are trying to build a fully automated report generator (dynamic reports) from the data we ingested, producing something we can just print and send to customers. I know this isn't really what Splunk is intended for, but we've managed to get really close with HTML thrown between tables and such. Some things that would inherently help (in case a Splunk developer sees this):

- A supported header/footer on tables and visualizations, with left, center, and right alignment as desired. This would let us add proper classifications to the tables natively. Right now we work around this with HTML.
- HTML page-break support. I'm having to add a bunch of HTML blocks with breaks in them. If I do it in one block it tries to auto-position everything together, so separating the blocks lets them break apart better. Personally I like this feature because I can keep together the text that I need to.
- .doc or .docx export support. If we could easily edit the output, then most of this wouldn't be a huge problem.

Anyway, on to the problem: I have a dashboard developed with everything I need. The problem is getting it into an artifact. I have toyed with every export I can find, and each one has different issues. They are ordered below from least to most effective, with their issues:

- Browser --> Print --> PDF: Doesn't format right at all. Grabs all visible text (menus, buttons, etc.), as expected. Threw this option out quickly.
- Splunkbase Smart Export app: Works really well in capture and expectations. It keeps formatting, colors, etc. However, it prints per pane, meaning you have to separate apps into separate panes manually to get alignment right. If a pane is bigger than the page, it doesn't roll to the next page; it cuts off there. So I moved away from this because some tables may be larger than a single page. Additionally, it doesn't support third-party apps, leaving a blank page (including a blank page for itself, lol). No page header/footer.
- Export --> PDF: Adds a bunch of unexplainable spacing between some tables and the next HTML block; I haven't found a lead on why. My align-right HTML doesn't align right, making the table classifications wrong. It doesn't support third-party applications, but it formats everything around them correctly, allowing us to still add them manually on top with little added work. No page headers/footers. Manual page breaks. This is one of the better ones; I just don't know how to fix the major problems.
- Export --> Print --> PDF: This is the closest I can get. It keeps third-party apps, I can add page classifications through the print support, and it keeps the HTML formatting! I still have to page-break manually, as with almost all of them. I literally have a workable product from this one, but the boss doesn't like the table formats it produces. After messing with HTML and overriding the default CSS of the tables, I can format the dashboard in an acceptable form (can't figure out the column grid lines yet). But the Splunk print export appears to do its own thing with the tables: it ditches the coloring and auto-sizes columns (usually in a bad way). Ironically, cell coloring applied through Splunk table formats stays if I use it...

So basically, does anyone know how to fix any of the issues listed? I'm really close with the print job, but I'm open to any of the others if people have fixes for those as well.
Hello, I need to highlight two countries in the choropleth map based on the count.

index="index=1" | table atomName status | eval country=if(atomName == "APAC", "INDIA", "USA") | stats count by country | stats count by country | inputlookup geo_countries | geom geo_countries | where featureId=country

The above query throws an error. Please suggest how I can write the query.
Hello, I'm working with Spring Boot and have the following annotation on a method:

@Timed(value = "api.rest.get-account-msgs", histogram = true, percentiles = {0.5, 0.95, 0.99})

and I want to view the performance (percentiles) of this method in a Splunk dashboard. After typing the following in the search bar:

index=<name of my index>* source=/opt/services_logs/splunk/<name of log file>.log type=timer name="api_rest_get-account-msgs"

I'm getting all the occurrences of this method, but I cannot find how to view the performance of this method over time as percentiles. Really getting desperate; I'd be thankful for any help!
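A hedged sketch, assuming each timer event carries a numeric duration field (duration_ms below is a placeholder for whatever field your log format actually emits): let Splunk compute the percentiles over time with timechart.

index=<name of my index>* source=/opt/services_logs/splunk/<name of log file>.log type=timer name="api_rest_get-account-msgs"
| timechart span=5m perc50(duration_ms) perc95(duration_ms) perc99(duration_ms)

If the events instead carry Micrometer's pre-computed percentile fields, charting those fields directly (e.g. timechart max() per field) would be the equivalent.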
Can Splunk be integrated with a Git repository? I would like to use simple Splunk UI tools to define indexes, data inputs, etc., and store these configurations in a Git repository. The reason is that a Git repository is the standard for NN; all configurations should be stored there and distributed to the various Splunk servers within the company.
Hello, I'm trying to delete some records from my KV store collection. When running this command (there are more keys; I just didn't want it to be long):

| inputlookup kv_alerts_prod
| where ( ( _key="5f401a77cc0ddc039e76ade4" ) OR ( _key="5f401a77cc0ddc039e76ade5" ) OR ( _key="5f401a78cc0ddc039e76ade6" ) OR ( _key="5f401a78cc0ddc039e76ade7" ) OR ( _key="5f401a7bcc0ddc039e76ade8" ) OR ( _key="5f401a81cc0ddc039e76ade9" ) OR ( _key="5f401a84cc0ddc039e76adea" ) OR ( _key="5f401a84cc0ddc039e76adeb" ) OR ( _key="5f401a84cc0ddc039e76adec" ) OR ( _key="5f401a84cc0ddc039e76aded" ) OR ( _key="5f401a84cc0ddc039e76adee" ) OR ( _key="5f401a84cc0ddc039e76adef" ) OR ( _key="5f401a84cc0ddc039e76adf0" ) OR ( _key="5f401a84cc0ddc039e76adf1" ) OR ( _key="5f401a85cc0ddc039e76adf2" ) )
| outputlookup kv_alerts_prod

I'm getting this error:

KV Store output failed with err: Request exceeds API limits - see limits.conf for details. (Batch save size=105742989 too large)

My limits.conf file looks like this:

[kvstore]
max_size_per_result_mb = 100000
max_size_per_batch_result_mb = 1000000000
max_size_per_batch_save_mb = 10000000000

What can I do?
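Two hedged observations. First, since outputlookup with the default append=false replaces the whole collection, filtering down to the listed keys and writing back would keep only the records you wanted to delete; deleting by rewrite would be the inverse (only two keys shown for brevity):

| inputlookup kv_alerts_prod
| where NOT (_key="5f401a77cc0ddc039e76ade4" OR _key="5f401a77cc0ddc039e76ade5")
| outputlookup kv_alerts_prod

Second, that rewrite still saves the entire (large) collection in one batch, which is what the error is complaining about. Deleting the individual keys through the KV store REST endpoint (an HTTP DELETE against storage/collections/data/kv_alerts_prod/<key>) avoids the bulk save entirely, and the enlarged limits.conf values only take effect after a restart of the instance hosting the KV store (normally the search head).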