All Topics

Good afternoon. We are currently indexing data into a summary index, but these events do not contain a timestamp, so they are all indexed with the time the query was run. This is a problem for Splunk: because there is no timestamp, all the data is indexed with the same timestamp, and the search is truncated when you try to find it. Is there any method that allows you to add a few seconds or minutes of offset to each group of events? Best regards
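One common workaround (a sketch, not from the original post; the index names are placeholders) is to spread an artificial offset across the events before collecting them, so each event gets a distinct timestamp:

```spl
index=my_source sourcetype=my_data
| streamstats count AS row
| eval _time = _time + row
| collect index=my_summary
```

Here `streamstats count` numbers the events, so consecutive events land one second apart; use a fraction (e.g. `row/1000`) for sub-second spacing.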
After upgrading from Splunk Enterprise 7.3 to 8.0.1, Splunk Web failed to start:

    [splunk@ip-10-202-18-65 ~]$ /opt/splunk/bin/splunk start
    The splunk daemon (splunkd) is already running.
    [MISSLYCKADES] Waiting for web server at http://127.0.0.1:8000 to be available.......
    WARNING: web interface does not seem to be available!

("MISSLYCKADES" is Swedish console output for "FAILED".)

    [splunk@ip-10-202-18-65 ~]$ cat /opt/splunk/var/log/splunk/web_service.log
    2020-03-11 14:13:29,700 ERROR [5e68f209517f57edcf17d0] root:770 - 'ascii' codec can't decode byte 0xc3 in position 17: ordinal not in range(128)
    Traceback (most recent call last):
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/root.py", line 132, in <module>
        from splunk.appserver.mrsparkle.controllers.top import TopController
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/top.py", line 27, in <module>
        from splunk.appserver.mrsparkle.controllers.admin import AdminController
      File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/admin.py", line 17, in <module>
        import formencode
      File "/opt/splunk/lib/python3.7/site-packages/formencode/__init__.py", line 3, in <module>
        from formencode.api import (
      File "/opt/splunk/lib/python3.7/site-packages/formencode/api.py", line 65, in <module>
        set_stdtranslation()
      File "/opt/splunk/lib/python3.7/site-packages/formencode/api.py", line 58, in set_stdtranslation
        localedir=localedir, fallback=True)
      File "/opt/splunk/lib/python3.7/gettext.py", line 533, in translation
        t = _translations.setdefault(key, class_(fp))
      File "/opt/splunk/lib/python3.7/gettext.py", line 260, in __init__
        self._parse(fp)
      File "/opt/splunk/lib/python3.7/gettext.py", line 421, in _parse
        catalog[str(msg, charset)] = str(tmsg, charset)
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 17: ordinal not in range(128)

    2020-03-11 14:35:51,498 INFO [5e68f747617fbbe19b7f90] __init__:174 - Using default logging config file: /opt/splunk/etc/log.cfg
    2020-03-11 14:35:51,499 INFO [5e68f747617fbbe19b7f90] __init__:212 - Setting logger=splunk level=INFO
    2020-03-11 14:35:51,499 INFO [5e68f747617fbbe19b7f90] __init__:212 - Setting logger=splunk.appserver level=INFO
    2020-03-11 14:35:51,499 INFO [5e68f747617fbbe19b7f90] __init__:212 - Setting logger=splunk.appserver.controllers level=INFO
    2020-03-11 14:35:51,499 INFO [5e68f747617fbbe19b7f90] __init__:212 - Setting logger=splunk.appserver.controllers.proxy level=INFO
    2020-03-11 14:35:51,499 INFO [5e68f747617fbbe19b7f90] __init__:212 - Setting logger=splunk.appserver.lib level=WARN
    2020-03-11 14:35:51,499 INFO [5e68f747617fbbe19b7f90] __init__:212 - Setting logger=splunk.pdfgen level=INFO
    2020-03-11 14:35:51,499 INFO [5e68f747617fbbe19b7f90] __init__:212 - Setting logger=splunk.archiver_restoration level=INFO
    2020-03-11 14:35:51,551 ERROR [5e68f747617fbbe19b7f90] root:769 - Unable to start splunkweb
    2020-03-11 14:35:51,551 ERROR [5e68f747617fbbe19b7f90] root:770 - 'ascii' codec can't decode byte 0xc3 in position 17: ordinal not in range(128)
    (followed by the same traceback as above)
Is there a way to test the Elasticsearch Data Integrator with Splunk Light, please? Many thanks, Nathan
Hey All, was just curious if there was a more efficient way of dropping DNS events by the actual query source rather than what I have below.

props.conf:

    [MSAD:NT6:DNS]
    TRANSFORMS-dropdns = dropdns

transforms.conf:

    [dropdns]
    REGEX = .*IPOFSOURCE.*
    DEST_KEY = queue
    FORMAT = nullQueue
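For reference, `REGEX` in transforms.conf performs an unanchored match against the event, so the leading and trailing `.*` are redundant and only add backtracking work; a sketch of a slightly cheaper equivalent (the IP is a placeholder for the actual query source, and escaping the dots avoids accidental matches):

```conf
# transforms.conf -- drop DNS events from one noisy query source
# 10.1.2.3 is a placeholder IP, not from the original post
[dropdns]
REGEX = 10\.1\.2\.3
DEST_KEY = queue
FORMAT = nullQueue
```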
I am experimenting with events that generate data in a tabular manner, and I want to create a historical graph from events with multivalue fields. As a test, I am logging the output of "df -hP" as a single event every few hours. The output looks like:

    /dev/mapper/vg_1-lv_home  59G  52M  56G  1% /home
    /dev/sda1                477M  40M 412M  9% /boot
    tmpfs                     24G    0  24G  0% /dev/shm
    <...>

I want to be able to extract all the fields in a row by simply matching one field (the first, which equals 'device'). I know that you can do the following search:

    source="df -Ph" | eval var1=mvindex(device, 0) | eval ... | table var1, ...

But this approach requires already knowing the order of the output to know which device you're selecting, which will not always be the case. Is there a way to do what I'm trying to do?

NOTE: I have already set up props/transforms to do multivalue search-time extraction. What I'm trying to do now is "extract" or output only the rows that match a search for the device name (first column). Example (pseudocode):

    if (device == "/dev/sda1") then
        get device.row
        print all fields in device.row
    fi
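One approach that does not assume row order (a sketch; the multivalue field names `device`, `size`, `used`, `avail` are assumptions based on the extraction described) is `mvfind`, which returns the index of the first value matching a regex; the same index can then pick the matching entry out of every column with `mvindex`:

```spl
source="df -hP"
| eval row=mvfind(device, "^/dev/sda1$")
| eval dev=mvindex(device, row), sz=mvindex(size, row),
       usd=mvindex(used, row), avl=mvindex(avail, row)
| table _time dev sz usd avl
```

Since all the multivalue fields come from the same table rows, the index returned by `mvfind` lines up across columns.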
I want to show each row as a tile with different customization:

1. Based on the row value, I should change the color of the tile.
2. I should be able to decide the number of tiles displayed in a row.

I found the answer below, which is partially helpful, and I am working from it; meanwhile, if anyone has done similar work, please share your thoughts. Link: https://answers.splunk.com/answers/735360/create-tile-or-panel-for-each-row-of-table.html Thanks in advance
Hi, first I just want to say we are using DB Connect and DB inputs with rising columns extensively against DB2, Oracle and SQL Server database types, all without issues. But recently we've been trying to set up a DB input against a Netezza database, and the rising column is not doing what it's supposed to do. I have a DB input set to run every 5 minutes, and it pulls the same record every time.

Looking at the TIME column I am using as the rising column, and the checkpoint value Splunk DB Connect uses, it looks like the checkpoint value isn't precise enough to make the WHERE clause work as intended. The query is:

    SELECT * FROM "HISTDBV3"."HISTUSR"."$hist_failed_authentication_3"
    WHERE TIME > ?
    ORDER BY TIME ASC

The value of the TIME column is: 2020-03-10 19:34:04.884102
The checkpoint value (according to the Edit Input interface) is: 3/10/2020 19:34:04.884

So with the "WHERE TIME > ?" clause, the condition is always true (19:34:04.884102 > 19:34:04.884). Looks like a bug to me. Has anyone experienced this? Thanks.
Need help bringing together results in a multisearch. I need to match department data from AD to an email address from O365 data on one row for reporting.

    | multisearch
        [search index="activedirectory" objectCategory="CN=Person*" AND sAMAccountType=805306368 AND userAccountControl!=514 AND userPrincipalName
         | eval ad_email=userPrincipalName
         | eval ad_department=department]
        [search index="o365data" dataset_name=account_management AssignedLicense
         | eval 360_email=ad_email]
    | table 360_email, ad_department
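Since multisearch only streams the two result sets side by side and does not correlate rows, one common alternative pattern (a sketch; the O365 email field name `UserPrincipalName` is an assumption, adjust to the actual field) is to search both indexes together, normalize a join key, and roll the rows up with stats:

```spl
(index="activedirectory" objectCategory="CN=Person*" sAMAccountType=805306368
     userAccountControl!=514 userPrincipalName=*)
OR (index="o365data" dataset_name=account_management AssignedLicense=*)
| eval email=lower(coalesce(userPrincipalName, UserPrincipalName))
| stats values(department) AS ad_department BY email
```

`stats ... BY email` merges the AD row and the O365 row that share the same normalized address onto one result row.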
Can we decrease the number of bundles created in /opt/splunk/var/run? This folder has its own bundles. Also, I am not concerned about the distributed bundles that appear in /opt/splunk/var/run/searchpeers.
I have 2 heavy forwarders receiving UF logs from about 2000 Windows servers. The traffic is being split to our indexers, with a route out via syslog to 2 F5 VIPs. For a specific server I see about 500k logs in 24 hours in Splunk, but on the receiving end of the syslog there are only 14 events. I'm pretty sure the HFs are overloaded, and I've put in a request to have 2 more built, but I'm also wondering if there is any further tuning I can do. I am not finding anything specific to HFs online. Thanks.
Hi, all our UFs and HFs use the following for the Windows input:

    [WinEventLog://Security]
    sourcetype=XmlWinEventLog:Security
    renderXml=1
    ...

All UFs and the cluster are on Splunk 7.2.4.2. I recently installed a few HFs and there used the latest Splunk code: 8.0.2.

My 7.* UF events arrive with the following sourcetype and source:

    sourcetype: XmlWinEventLog:Security
    source:     XmlWinEventLog

My 8.* HF events arrive instead with:

    sourcetype: WinEventLog:Security
    source:     xmlwineventlog

Any ideas what's going wrong? I have Splunk_TA_windows installed on the search head, which renames all the sourcetypes, but that of course applies to all Windows sourcetypes. But it looks like the sourcetype renaming only applies for the HF, and it still does not explain why the source is changed as well.

thx
afx
Why is it not recommended to use a deployment server for deploying config bundles to a search head cluster? Why do we need a separate deployer instance, and how is a deployer different from a deployment server?
Hello, it seems one of the LDAP strategies has stopped working for an unknown reason. I have confirmed the password and the settings are correct. I have also checked the Map Groups field and confirmed that the user role has been added, and I am able to see all the users that should be there under LDAP Users. I have also tried reloading the authentication configuration, with no luck. Any help or suggestions would be greatly appreciated. Below is the message I am getting. Thanks

    3/11/20 8:30:46.318 AM
    03-11-2020 08:30:46.318 -0500 ERROR UiAuth - user=myuser action=login status=failure reason=user-initiated useragent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" clientip=123.123.123.123
    host = abc001   source = \Splunk\var\log\splunk\splunkd.log   sourcetype = splunkd

    3/11/20 8:30:46.318 AM
    03-11-2020 08:30:46.318 -0500 ERROR UserManagerPro - LDAP Login failed, could not find a valid user="myuser" on any configured servers
    host = abc001   source = *\Splunk\var\log\splunk\splunkd.log   sourcetype = splunkd
I have multiple log events like the ones below, based on my search criteria:

    2020-03-11 08:23:55,141 - [UserId=xyz | UserName=abc | INFO INFO APIName="REPORT SEARCH",Stage="exit",StartTime="2020-03-11 08:23:55.101",EndTime="2020-03-11 08:23:55.141",TotalTime="40 Milliseconds",XBAPILatency="0 Milliseconds",XBLatency="40 Milliseconds",XBMessage="REPORT SEARCH API response was 40 Milliseconds.",RequestStatus="Success"
    2020-03-11 08:23:55,151 - [UserId=xyz | UserName=abc | INFO INFO APIName="REPORT SEARCH",Stage="exit",StartTime="2020-03-11 08:23:55.101",EndTime="2020-03-11 08:23:55.151",TotalTime="50 Milliseconds",XBAPILatency="0 Milliseconds",XBLatency="50 Milliseconds",XBMessage="REPORT SEARCH API response was 50 Milliseconds.",RequestStatus="Success"
    2020-03-11 08:23:55,161 - [UserId=xyz | UserName=abc | INFO INFO APIName="REPORT SEARCH",Stage="exit",StartTime="2020-03-11 08:23:55.101",EndTime="2020-03-11 08:23:55.161",TotalTime="60 Milliseconds",XBAPILatency="0 Milliseconds",XBLatency="60 Milliseconds",XBMessage="REPORT SEARCH API response was 60 Milliseconds.",RequestStatus="Success"

I want to build a Splunk query which will give me the average response time based on the TotalTime value. I tried | stats avg(TotalTime), but no results are showing because the value contains a string ("Milliseconds") as well. Can someone please help me with this, as I am new to Splunk?
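Since TotalTime is a quoted string such as "40 Milliseconds", the number has to be extracted before stats can average it; a sketch (the extracted field name `total_ms` is arbitrary):

```spl
... your base search ...
| rex "TotalTime=\"(?<total_ms>\d+) Milliseconds\""
| stats avg(total_ms) AS avg_response_ms
```

`rex` pulls just the digits into `total_ms`, which stats can then treat as a number.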
Hello team Splunk. Using Splunk 8.0.1 in search head cluster mode, I cannot delete some apps on the search head cluster from the deployer. I remove the app on the deployer from ${SPLUNK_HOME}/etc/shcluster/apps and push the new bundle to the cluster using:

    splunk apply shcluster-bundle --answer-yes -target http://you-know:8089

But the app is still present on the cluster members under ${SPLUNK_HOME}/apps and accessible in the GUI. I noticed a log line in conf.log on the SH captain containing:

    {"name":"my-undeletable-app","action":"preserved"}

What does "preserved" mean? Is there a way to prevent an app from being deleted? And if so, how can I disable that prevention? Thank you very much, ivo
How can I send alerts from Splunk to Netcool? Is Splunk able to send alerts to Netcool OMNIbus?
Hi, I'm writing JSON NLog files from Visual Studio into Splunk (with the NLog WebService target). In my Splunk search results, filtering with "Add to search" works, because "spath" gets added automatically, it seems:

    ... | spath Message | search Message="Villa við að...." | sort -Date

(The raw JSON data: "Message": "Villa vi\u00f0 a\u00f0 ...; \u00f0 is an Icelandic Unicode character: https://www.fileformat.info/info/unicode/char/00f0/index.htm)

However, if I click the "Message" property value on the left in "Interesting fields", I get "No results found". That Splunk search doesn't add the "spath":

    ... Message="Villa við að stofna liabilityevaluationclaimholders." | sort -Date

One solution would be to automatically add "spath" whenever somebody clicks a property value in "Interesting fields". Is that possible (just like what is done when you add the property value as a filter in the search results)? Or is there a more obvious solution (not requiring "spath" in the search)?

Thanks a lot, Gunnar
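One possible fix (a sketch, assuming you can edit the sourcetype's search-time props; `nlog_json` is a placeholder for the actual sourcetype name) is to declare the data as JSON so fields like Message are extracted automatically at search time, making field values clickable without an explicit spath:

```conf
# props.conf on the search head -- sourcetype name is a placeholder
[nlog_json]
KV_MODE = json
```

With `KV_MODE = json`, Splunk runs JSON field extraction on every search over that sourcetype, so the sidebar field clicks match events directly.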
We have an index 'abc' to which data gets fed at non-uniform intervals. I would like to get all events of this index that were indexed recently. Could I get some guidance on how to achieve this?

Example: data is indexed on 1st of March, 5th of March and 10th of March; I want to get all events indexed on 10th of March.
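Every event carries a hidden `_indextime` field recording when it was indexed, as opposed to `_time`, the event's own timestamp; a sketch filtering to events indexed in the last 24 hours:

```spl
index=abc
| eval it=_indextime
| where it >= relative_time(now(), "-24h")
| eval indexed_at=strftime(it, "%F %T")
| table indexed_at _time _raw
```

Copying `_indextime` into a normal field first makes it usable in `where` and visible in tables; adjust the `relative_time` offset to cover whatever "recently" means for the feed.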
I have two multiselect inputs in a dashboard with the same functionality: when the "All" option is selected, the other values should be reset. Only one of the two dropdowns is working; the other does not work when the unset token is used.

XML code (note that the second input and its inner search element both use id="SERVER_RESET"; component ids should be unique, or getInstance may resolve to the wrong component):

    <input type="multiselect" token="NAME" id="ALL_RESET" searchWhenChanged="false">
      <label>Select Site</label>
      <fieldForLabel>Name</fieldForLabel>
      <fieldForValue>Name</fieldForValue>
      <search>
        <query>| inputlookup </query>
      </search>
      <change>
        <unset token="form.Brand"></unset>
      </change>
      <choice value="*">---All Brand---</choice>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter>,</delimiter>
    </input>
    <input type="multiselect" token="Brand" id="SERVER_RESET" searchWhenChanged="false">
      <label>Select Server(s)</label>
      <search id="SERVER_RESET">
        <query>| inputlookup </query>
      </search>
      <fieldForLabel>BRANDNAME</fieldForLabel>
      <fieldForValue>BRANDNAME</fieldForValue>
      <delimiter>,</delimiter>
      <choice value="*">---All BRANDNAME---</choice>
    </input>

JavaScript (the original declared `selection` and `multi` twice at the same scope, so both change handlers closed over the same `multi` variable and ended up acting on the last-assigned input; wrapping the logic in a function gives each input its own variables):

    // attach the "All"-option logic to one multiselect, by component id
    function setupAllOption(componentId) {
        var multi = splunkjs.mvc.Components.getInstance(componentId);
        multi.on("change", function() {
            // get the current selection
            var selection = multi.val();
            // check if there is more than one entry and one of them is "*"
            if (selection.length > 1 && ~(selection.indexOf("*"))) {
                if (selection.indexOf("*") == 0) {
                    // "*" was first, remove it and leave the rest
                    selection.splice(selection.indexOf("*"), 1);
                    multi.val(selection);
                    multi.render();
                } else {
                    // "*" was added later, remove the rest and leave "*"
                    multi.val("*");
                    multi.render();
                }
            }
        });
    }

    setupAllOption("ALL_RESET");
    setupAllOption("SERVER_RESET");
Hi Team, we have installed the Alert Manager app on the SH, and it is working fine for admin but not for a regular user. When the user logs in and accesses the Alert Manager app, it fetches the dashboard and view from another app. Has anyone faced this kind of issue?