All Topics

Hi everybody,

When parsing a long string containing escaped double-quotes I get this error:

Error in 'rex' command: regex="^(?([^"]|\")) has exceeded the configured depth_limit, consider raising the value in limits.conf.

Steps to reproduce:

| makeresults
| eval value="dolor commodo sit amet Sed fringilla nisi et augue condimentum, finibus hendrerit massa egestas Phasellus erat nunc, placerat vitae molestie quis, tempor ac ipsum Phasellus malesuada risus risus, sed lobortis purus vestibulum et Curabitur tempus tincidunt faucibus Aenean sed ipsum eleifend molestie, eros non at accumsan odio turpis sed Integer sed egestas nibh, nec fringilla quam orci eu ipsum Ut at ligula nec metus cursus condimentum eget id eros Pellentesque habitant morbi tristique senectus et \"netus\" et malesuada fames ac turpis egestas Nunc mollis neque eros, eu luctus augue iaculis a Aenean maximus varius erat sed auctor Duis consectetur luctus ligula fringilla quam"
| rex field=value "^(?<RESULT>([^\"]|\\\")*)"
| table RESULT

My regex ([^\"]|\\") generates a capturing group that exceeds the configured capacity. I tried to disable the capturing group with the syntax (?:[^\"]|\\")*, but I still hit the limit, e.g.:

| makeresults
| eval value="dolor commodo sit amet Sed fringilla nisi et augue condimentum, finibus hendrerit massa egestas Phasellus erat nunc, placerat vitae molestie quis, tempor ac ipsum Phasellus malesuada risus risus, sed lobortis purus vestibulum et Curabitur tempus tincidunt faucibus Aenean sed ipsum eleifend molestie, eros non at accumsan odio turpis sed Integer sed egestas nibh, nec fringilla quam orci eu ipsum Ut at ligula nec metus cursus condimentum eget id eros Pellentesque habitant morbi tristique senectus et \"netus\" et \"malesuada\" fames ac turpis egestas Nunc mollis neque eros, eu luctus augue iaculis a Aenean maximus varius erat sed auctor Duis consectetur luctus ligula fringilla quam laoreet consectetur tempus, sapien mi pretium ipsum, et rutrum lorem orci vel ipsum. Vivamus \"vitae\" rhoncus erat, vel \"blandit\" libero. Vestibulum dictum arcu eu ligula \"dignissim\", eu efficitur \"ante\" faucibus. Nam eu lacus rhoncus, tempor lorem at, ultrices elit. Nulla facilisi. Ut feugiat lobortis \"orci\". Proin at ultricies metus. Donec pharetra justo nec sapien hendrerit lacinia. \"Vestibulum\" ornare nibh diam, in ullamcorper massa ultricies et. Quisque fringilla dolor ornare nibh rhoncus vestibulum. Integer porttitor enim nec elementum sagittis. Aliquam ut semper diam, non vehicula risus. Proin ut fringilla massa. Mauris eu dolor ex. Vivamus sit amet diam sapien."
| rex field=value "^(?<RESULT>(?:[^\"]|\\\")*)"
| table RESULT

If I remove two \" from the string, it works. Any idea?
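For depth_limit errors on alternation-per-character patterns like ([^\"]|\\\")*, one standard rewrite is the "unrolled loop" form, which consumes runs of ordinary characters in a single sub-expression and only enters the alternation at each backslash escape, so the engine's recursion depth no longer grows with the string length. A minimal sketch with Python's re module (depth_limit itself is a Splunk/PCRE setting, so this only illustrates the pattern rewrite, not the limit):

```python
import re

# Shortened sample value with escaped quotes, as in the question.
value = r'senectus et \"netus\" et \"malesuada\" fames'

# Original idea: ^(?:[^\\"]|\\")*  -- one alternation branch per character.
# "Unrolled loop" rewrite: plain runs are consumed in one gulp, and the
# alternation is entered only when a backslash escape is seen.
unrolled = re.compile(r'^(?P<RESULT>[^"\\]*(?:\\.[^"\\]*)*)')

print(unrolled.match(value).group('RESULT'))
```

The same unrolled pattern can be dropped into rex (with Splunk's usual extra escaping for quotes inside the search string).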
Hello, I have this query:

index=prod eventtype="csm-messages-dhcpd-lpf-eth0-listening" OR eventtype="csm-messages-dhcpd-lpf-eth0-sending" OR eventtype="csm-messages-dhcpd-send-socket-fallback-net" OR eventtype="csm-messages-dhcpd-write-zero-leases" OR eventtype="csm-messages-dhcpd-eth1-nosubnet-declared"
| transaction maxpause=2s maxspan=2s maxevents=5
| eval Max_time=(duration + _time)
| eval Min_time=(_time)
| table _time, eventcount, eventtype, Min_time, Max_time, tail_id, kafka_uuid
| foreach eventtype [eval flag_eventtype=if(eventcount!=5,"no", "yes")]

Now I have a lookup table, and I want to set parameters in my query that are taken from the lookup table. For example, instead of listing each eventtype="..." OR eventtype="..." clause explicitly as above, I want to take the values of eventtype from the lookup table. How can I do that? Thanks
Based on this search:

source="abc.log"
| rex "\"duration\" : (?<duration>\d+)"
| rex "\"correlation\" : \"(?<correlation>[^\"]+)"
| where duration > 50000
| table correlation duration

I want to get either A) a mail to mymail@hotmail.com every time there is a hit, or, if that is not possible, B) the query run every hour with any matches mailed to me. I have searched a lot on this forum and cannot seem to find any place where I can do this. Any help is greatly appreciated!
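Option B (run hourly, mail matches) is what a scheduled alert does, configurable in the UI via Save As > Alert or directly in savedsearches.conf. A hedged sketch of such a stanza follows; the stanza name and schedule are made-up placeholders, so treat this as a shape to adapt rather than a verified configuration:

```conf
[Mail on slow correlations]
search = source="abc.log" | rex "\"duration\" : (?<duration>\d+)" | rex "\"correlation\" : \"(?<correlation>[^\"]+)" | where duration > 50000 | table correlation duration
enableSched = 1
cron_schedule = 0 * * * *
counttype = number of events
relation = greater than
quantity = 0
actions = email
action.email.to = mymail@hotmail.com
action.email.sendresults = 1
```

With a real-time or frequent schedule and the same trigger condition, this also approximates option A.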
We are using Splunk ES version 5.2. The size of the identities_expanded CSV file is over 350 MB and is causing issues with search bundle replication. Can this lookup be changed to a KV store instead? I did try to convert it, but it reverts back to a file-based lookup automatically.
I have added a monitor stanza for the log folder which contains the log files that I want to ingest into Splunk. I have set a sourcetype for each log file in props.conf, but in some Splunk versions (like 7.3.3, 8.0.0, 8.0.1) it is not working, and Splunk sets the sourcetype for those log files to one of the following:

1) log_file_name-too_small
2) log_file_name-{digit} (like log_file_name-2, log_file_name-4)

I have read some answers saying this happens because of the small size of the log file, but it is not an issue in some Splunk versions (like 8.0.4), and it happens on both Windows and Linux (mostly on Windows). I have tried the approaches below in props.conf, but none of them seem to work:

1) [source::.../etc/apps/<app_name>/local/logs/log_file.log(.\d+)?]
   sourcetype = <sourcetype-name>
2) [source::...*etc*apps*<app_name>*local*logs*log_file.log(.\d+)?]
   sourcetype = <sourcetype-name>
3) [source::....*etc.*apps.*<app_name>.*local.*logs.*log_file.log(.\d+)?]
   sourcetype = <sourcetype-name>
4) [source::...(.)*etc(.)*apps(.)*<app_name>(.)*local(.)*logs(.)*log_file.log(.\d+)?]
   sourcetype = <sourcetype-name>
5) [source::...(.*)etc(.*)apps(.*)<app_name>(.*)local(.*)logs(.*)log_file.log(.\d+)?]
   sourcetype = <sourcetype-name>
6) [source::...\\etc\\apps\\<app_name>\\local\\logs\\log_file.log(.\d+)?]
   sourcetype = <sourcetype-name>
7) [source::...\etc\apps\<app_name>\local\logs\log_file.log(.\d+)?]
   sourcetype = <sourcetype-name>
8) [source::C:\\Program Files\\Splunk\\etc\\apps\\<app_name>\\local\\logs\\log_file.log(.\d+)?]
   sourcetype = <sourcetype-name>
9) [source::C:\Program Files\Splunk\etc\apps\<app_name>\local\logs\log_file.log(.\d+)?]
   sourcetype = <sourcetype-name>

I tried all of these because I thought it might be an issue with the Windows path separator, but none of them work. I did find one approach that works and gives the right sourcetype:

In props.conf:
[source::...log_file.log(.\d+)?]
sourcetype = <sourcetype-name>

But I don't want to rely on this approach, because the same log file name may be present in some other app and get matched that way, and it is also time-consuming since it has to look for the file in all folders.

I tried to solve this another way, by providing a sourcetype stanza for the "log_file_name-too_small" sourcetype and changing the sourcetype with the help of transforms.conf. It works for "log_file_name-too_small", as below.

In props.conf:
[log_file_name-too_small]
TRANSFORMS-remove_too_small_sourcetype = remove_too_small

In transforms.conf:
[remove_too_small]
DEST_KEY = MetaData:Sourcetype
REGEX = .*
FORMAT = sourcetype::<sourcetype-name>

But as mentioned above, the sourcetype value might also be "log_file_name-{digit}", so I would need to handle that the same way (like specifying [log_file_name-2]). I don't think that is the right approach, as the digit may be anything, so I tried a regex (log_file_name-*) in the sourcetype stanza of props.conf, but it is not working, maybe because the sourcetype stanza does not allow regex.

It would be great if anyone is able to solve this problem. Regards.
There are approximately 1.5 billion ingested entries from 40 forwarders. Performing a search with any criteria on Windows hosts lists all events across all time. Performing the same search on Linux hosts only returns 24 hours of data, regardless of the time/date ranges supplied; each day the data only covers the last 24 hours. What settings could be causing this?
I have the following timechart, which I display in a column chart, where I use the average value as an overlay:

timechart span=1d avg(time), count

However, if possible, I'd like a second overlay showing a flat line with the average time over the entire search period, not just the daily span of the timechart.
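In SPL the flat overlay is commonly produced with an eventstats or appendpipe before/around the timechart; whichever way it is wired up, the flat line is simply the mean over the whole period rather than per day. As a sanity check on what the two overlays should equal, here is the arithmetic in Python over hypothetical data:

```python
from statistics import mean

# Hypothetical per-event (day, time) pairs standing in for the raw events.
events = [("day1", 10), ("day1", 20), ("day2", 40)]

# Per-day averages, as "timechart span=1d avg(time)" would produce.
by_day = {}
for day, t in events:
    by_day.setdefault(day, []).append(t)
daily_avg = {day: mean(ts) for day, ts in by_day.items()}

# The flat second overlay: one average over the entire search period.
overall_avg = mean(t for _, t in events)

print(daily_avg, overall_avg)
```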
I have an event for each device with multiple checks, as below, and I want to find the count of devices which have "Pass" in all the fields and the count of devices which have "Fail" in even one field.

Device1
check1: Pass
check2: Fail
check3: Pass

Device2
check1: Pass
check2: Pass
check3: Pass

Device3
check1: Fail
check2: Fail
check3: Pass

I'm looking for something similar to this:

Healthy_Device_Count=1
Un_Healthy_Device_Count=2
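In SPL this would typically be a stats/eval over the check fields; the counting logic itself, sketched in Python over the sample data above (field and device names from the question):

```python
# Check results per device, mirroring the sample events above.
devices = {
    "Device1": {"check1": "Pass", "check2": "Fail", "check3": "Pass"},
    "Device2": {"check1": "Pass", "check2": "Pass", "check3": "Pass"},
    "Device3": {"check1": "Fail", "check2": "Fail", "check3": "Pass"},
}

# Healthy: every check is "Pass". Unhealthy: at least one check is not.
healthy = sum(all(v == "Pass" for v in checks.values())
              for checks in devices.values())
unhealthy = len(devices) - healthy

print(healthy, unhealthy)  # → 1 2
```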
I run Universal Forwarder 8.0.3 and the Splunk Add-on for Unix and Linux 8.0.0 on AIX 7.1, but I found no events coming into index = OS. After running ps -ef | grep splunk, I found some scripts (e.g. iostat.sh, cpu.sh, ...) pending in the queue. After we killed those jobs, the events came into the index and the scheduled scripts were running again. How can I troubleshoot this issue?
What is the use of a command modifier, in layman's terms, please? I don't know what it does, apart from the understanding that it modifies commands.
I am trying to monitor the deployer and search head service status using _internal logs. Which fields should I consider to monitor whether the Splunk service on the deployer and the search heads is up and running? Note: I am building a dashboard to monitor Splunk service status.
I have a problem with this search below, over the last 25 days:

index=syslog Reason="Interface physical link is down" OR Reason="Interface physical link is up" NOT mainIfname="Vlanif*" "nw_ra_a98c_01.34_krtti"

Normally the field7 values look like these:

Region  field7                  Date   mainIfname             Reason                         count
ASYA    nw_ra_m02f_01.34pndkdv  may 9  GigabitEthernet0/3/6   Interface physical link is up  3
ASYA    nw_ra_m02f_01.34pldtwr  may 9  GigabitEthernet0/3/24  Interface physical link is up  2

But recently they were like this:

00:00:00.599 nw_ra_a98c_01.34_krtti
00:00:03.078 nw_ra_a98c_01.34_krtti

I think the problem may be related to this: it started to happen after the disk free alarm (-Cri- Swap reservation, bottleneck situation, current value: 95.00% exceeds configured threshold: 90.00%. : 07:17 17/02/20). This is not really about disk, it's about swap space: the application exhausts memory and then switches to swap. There was a memory increase before, but it was obviously insufficient, and it is switching to swap again. I need to understand: why do they use so many resources?
In the Tag section of the ES Incident Review page, is it possible to have specific tags selectable, rather than having to type them in? Or is it better to modify the page to add another multiselect field to filter by specific tags, or add a multiselect field to add specific search terms?
I'm trying to update a role in our environment via the Splunk REST API, using a POSTMAN-like app with an input file which holds several changed parameters for the specified role. The POST call looks like this:

POST https://localhost:8089/services/authorization/roles/rl_user
Authorization: Bearer xxx
< ...\input.txt

And this is the content of the input file:

srchIndexesAllowed=main;srchJobsQuota=3;srchDiskQuota=300

Even though this works quite well, editing the input file manually like this is really impractical. What I would like to know is whether there is an option to first call the API to export the role in Atom (XML)/JSON format, take that export, update the values I need, and then import the file again via a POST API call to change the params. I've been playing with this for quite some time but no luck. Any advice is appreciated. Thanks.
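The export/edit/import loop described above can be sketched in Python. This is a sketch under assumptions, not a verified implementation: the endpoint and token are taken from the question, output_mode=json is the usual way to get JSON instead of the default Atom XML from the Splunk REST API, and the live HTTP calls are left commented out (they would use the third-party requests library and a real server):

```python
from urllib.parse import urlencode

BASE = "https://localhost:8089/services/authorization/roles/rl_user"
HEADERS = {"Authorization": "Bearer xxx"}

def merge_params(current: dict, changes: dict) -> str:
    """Overlay the changed parameters on the exported ones and build
    the form-encoded body for the POST call."""
    merged = {**current, **changes}
    return urlencode(merged)

# 1) Export the role as JSON instead of the default Atom XML:
#      r = requests.get(BASE, headers=HEADERS, params={"output_mode": "json"})
#      current = r.json()["entry"][0]["content"]
# 2) Update the values locally, then 3) POST the merged body back:
#      requests.post(BASE, headers=HEADERS, data=body)
body = merge_params({"srchJobsQuota": "3"},
                    {"srchJobsQuota": "6", "srchDiskQuota": "300"})
print(body)  # → srchJobsQuota=6&srchDiskQuota=300
```

The merge step is the part the manual input file was doing by hand; form-encoding the merged dictionary replaces editing semicolon-separated parameters.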
Hello All, Sorry to ask a silly question; I had a look around but was unable to find a solution. When we set an alert in Splunk, there is an Expires parameter. I understand this is the TTL for the alert (sorry if I have misunderstood it). I don't want my alert to expire. How can I achieve this, please? If there is no means to achieve this, is there a way to trigger a notification when the alert is about to expire? I tried a couple of options in the alert settings to see if Splunk triggers a notification when an alert expires; I'm afraid no notification was triggered. For example, I set "Trigger Condition" and "Trigger Time" and set the alert to expire in 10 minutes: the alert expired, but no notification was triggered via email. I had a feeling it wouldn't work, as the trigger condition means the condition that triggers the alert and NOT alert expiry, but I tried my luck! Best Regards,
I have a field "add_time" with values like "05-27-2020 08:57:34.024". I want to create a field which will show the time 45 days ahead of the given time, i.e. the output should be "07-11-2020 08:57:34.024". Please help me in writing this SPL. Thanks in advance.
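In SPL this kind of shift is usually done by parsing add_time with strptime and adding 45 days' worth of seconds (or using relative_time) before formatting back. The date arithmetic itself, checked against the example in the question with Python's datetime:

```python
from datetime import datetime, timedelta

add_time = "05-27-2020 08:57:34.024"
fmt = "%m-%d-%Y %H:%M:%S.%f"

# Parse, add 45 days, format back; %f emits six microsecond digits,
# so trim the last three to keep millisecond precision.
shifted = datetime.strptime(add_time, fmt) + timedelta(days=45)
result = shifted.strftime(fmt)[:-3]

print(result)  # → 07-11-2020 08:57:34.024
```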
Hi Team, An HF has been installed on a server and connectivity to Splunk has been created, but we are not able to see any logs in Splunk. We have two different hosts: for one of the hosts we are able to see the logs, but we are not able to see the logs for the other host. Note: Host2 is using the same index name, and its log files are placed in the same path as on Host1.
Hi Splunkers, We have the following environment:

• Splunk - 8.0.0
• OS - Windows Server 2016
• Splunk db_connect app - 3.2.0/3.3.1
• Python - python3
• JRE - 1.8

NOTE: The machine has the timezone variable (TZ) set.

With the above configuration the db_connect app throws an exception on the UI, "Not able to communicate with task server". This is because dbx_logging_formatter.py throws an exception while calling the line os.unsetenv('TZ'), and the exception is: "module 'os' has no attribute 'unsetenv'". After looking at the python3 SDK, we found that the unsetenv method is not present in the os module. This particular piece of code would have to be replaced by os.putenv('TZ', None) when running with python3. Please let us know if this is an issue or whether there is some workaround. We cannot leave TZ unset as a workaround, and we cannot downgrade python3 to python2. TIA Hanika
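Setting aside whether DB Connect accepts a patched formatter (an assumption to verify with Splunk support), note that os.putenv('TZ', None) would itself fail, since os.putenv requires string arguments. The portable way to drop a variable in Python 3 is to remove it from os.environ:

```python
import os

# os.environ.pop removes the variable from the process environment and
# works whether or not the platform's os module exposes unsetenv.
os.environ["TZ"] = "UTC"
os.environ.pop("TZ", None)

print("TZ" in os.environ)  # → False
```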
Recently we encountered a problem: the /opt file system on the indexer server reached 100%, due to which users were unable to search. We found that the /opt/splunk/archive/main folder is consuming most of the disk space (499 GB out of 500 GB). This is the folder which contains frozen data; note this line in the configuration of the main index in indexes.conf:

coldToFrozenDir = /opt/splunk/archive/main

[main]
coldPath = $SPLUNK_DB/defaultdb/colddb
bucketRebuildMemoryHint = 0
compressRawdata = 1
syncMeta = 1
frozenTimePeriodInSecs = 15552000
enableOnlineBucketRepair = 1
homePath = $SPLUNK_DB/defaultdb/db
enableDataIntegrityControl = 0
coldToFrozenDir = /opt/splunk/archive/main
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
enableTsidxReduction = 0
maxTotalDataSizeMB = 50000

Can we delete frozen data?
Hi all, I tried to use the Config Explorer app on a stand-alone Splunk server (on Italian Windows 10), but when opening it I get the following error message:

An error occurred! init: An exception of type TypeError occurred. Arguments: ('environment can only contain strings',)

and then I don't have any functions, dashboards, or interface. Is there an installation or configuration action to perform to start the app? Ciao. Giuseppe