All Topics


Hi there, today my Controller got stuck with high CPU usage. I restarted it, and now for some weird reason no one can log in; I can only access /controller/admin.jsp. Inside the server log I got a lot of entries like this:
[#|2021-12-06T22:39:37.411-0300|INFO|glassfish 4.1|javax.enterprise.system.core.security|_ThreadID=69;_ThreadName=http-listener-1(13);_TimeMillis=1638841177411;_LevelValue=800;_MessageID=NCLS-SECURITY-05046;|Audit: Authentication refused for [singularity-agent@customer1].|#]
[#|2021-12-06T22:39:37.412-0300|INFO|glassfish 4.1|javax.enterprise.system.core.security.com.sun.enterprise.security.jmac.callback|_ThreadID=69;_ThreadName=http-listener-1(13);_TimeMillis=1638841177412;_LevelValue=800;|jmac.loginfail|#]
I even tried to create a new account, but now when I log in everything in the controller is blank. Did I break something? Is it possible to recover what I "destroyed"? Any help is appreciated.
Hey everyone, if an event is added to a case as evidence, it's simple to retrieve it while looking at the case: Sources -> Cases -> Click on Case -> Evidence, and look at Associated Events. But this is only useful if the events were added as evidence. If they were not added as evidence, is there a way of listing them through a case? Thanks.
Hello, I am getting the following warning message when trying to extract fields from the Splunk UI (Web Console). I could extract the fields, but my extracted fields are not showing up in my searches/queries. However, I can see the list of the extractions (extracted fields) under Settings -> Fields -> Field extractions in the Splunk UI (Web Console). What does this warning message mean, and why are my fields not showing up in my searches/queries? Any help will be highly appreciated. Thank you so much.
Warning Message:
When using the Expand your search feature, the Expanded Search String output is stripped of any custom formatting, particularly newlines. When expanding a search, the macro should instead be expanded and inserted verbatim, and the formatting should be retained in the Expanded Search String pane.
I have a date column that I'm trying to convert to %m/%d/%Y. The date stamp is a little complex, but I had it working until daylight saving time took effect. Now anything with a timezone offset that has a non-zero number in the third digit, -0480 for example, returns blank. Below is my query:
| inputlookup DateStampConvert.csv
| rename "System Name" as systemName
| rename "Date Stamp" as DateStampDate
| eval dateStamp=strftime(strptime(DateStampDate, "%b %d %Y %H:%M:%S %z"), "%m/%d/%Y")
| table systemName dateStamp
| outputlookup dateStamp.csv
Is there something I'm missing?
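A minimal sketch of one way to isolate the problem, assuming the lookup and field names above; it parses the value both with and without the trailing offset and keeps whichever succeeds, which should show whether unusual offsets such as -0480 are what make strptime return null:
| inputlookup DateStampConvert.csv
| rename "System Name" as systemName, "Date Stamp" as DateStampDate
| eval withTZ=strptime(DateStampDate, "%b %d %Y %H:%M:%S %z")
| eval noTZ=strptime(replace(DateStampDate, "\s[+-]\d{4}$", ""), "%b %d %Y %H:%M:%S")
| eval dateStamp=strftime(coalesce(withTZ, noTZ), "%m/%d/%Y")
| table systemName DateStampDate withTZ noTZ dateStamp
Note that dropping the offset changes which timezone the value is interpreted in, so this is a diagnostic sketch rather than a drop-in replacement.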
Hey Splunk Gurus- I'm attempting to calculate the duration between when an event was first identified (which is an entry in the event, "alert.created_at") and the "_time" timestamp. I'm able to calculate this timestamp difference using strptime on "alert.created_at", but the conversion of that time to epoch is relative to the viewer's timezone: the duration changes based on how you configure the Splunk UI timezone. The "_time" field is set to "current" in props.conf. Here's my current search:
index=* alert.tool.name=* action="fixed"
| eval create_time=strptime('alert.created_at', "%Y-%m-%dT%H:%M:%SZ")
| eval duration = _time - create_time
Here's a sample of the log:
{ "action": "fixed", "alert": { "number": 2, "created_at": "2021-11-22T23:49:19Z" } }
When I execute this search while my UI preferences are set to "GMT", the result is 1183959, which is the correct duration. When I set that preference to "PST", the result is 1155159. That number is wrong by exactly 8 hours. Any suggestions on how to deal with this? I'm fine with either a search-time solution or a config change in props.conf if that's best. Thanks!
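One workaround seen for this pattern, offered as a sketch rather than the definitive fix: because the trailing "Z" in the format string is treated as a literal character, strptime falls back to the viewer's timezone; appending an explicit UTC offset and parsing it with %z pins the conversion to UTC regardless of the UI preference:
index=* alert.tool.name=* action="fixed"
| eval create_time=strptime('alert.created_at' . "+0000", "%Y-%m-%dT%H:%M:%SZ%z")
| eval duration = _time - create_time
With this, the duration should come out the same under GMT and PST UI settings.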
I have a Linux server with Splunk Enterprise 6.5. However, my team manager wants me to upgrade Splunk from 6.5 to 8.2. I couldn't find old releases on the Splunk download page. How can I upgrade it?
Hi all, I would like to know if there is a way to group multiple values from repeated fields that come in the same log event. For example, take the following log events:
Log1: moduleName="Module A" moduleType="TypeA" moduleName="Module B" moduleType="TypeB"
Log2: moduleName="Module C" moduleType="TypeC" moduleName="Module A" moduleType="TypeA"
I tried something like:
app_search_criteria | stats count by moduleName | sort -count
But this way it only brings data for the first moduleName field it finds in a log, not for all of them. For example, I'm getting the following table:
moduleName         count
ModuleA                     1
ModuleC                     1
The ideal result would be:
moduleName         moduleType       count
ModuleA                      TypeA                   2
ModuleB                      TypeB                   1
ModuleC                      TypeC                   1
Thanks in advance!
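A sketch of one way to get the ideal table above, assuming each pair appears in the raw event as moduleName="..." immediately followed by moduleType="...": rex with max_match=0 captures every pair as multivalue fields, mvzip/mvexpand turns them back into one row per pair, and stats counts each combination:
app_search_criteria
| rex max_match=0 "moduleName=\"(?<moduleName>[^\"]+)\"\s+moduleType=\"(?<moduleType>[^\"]+)\""
| eval pair=mvzip(moduleName, moduleType, "|")
| mvexpand pair
| eval moduleName=mvindex(split(pair, "|"), 0), moduleType=mvindex(split(pair, "|"), 1)
| stats count by moduleName moduleType
| sort -count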
I have user A who is getting 3 different roles. Normally this isn't an issue, but one of those roles has a restricted search in it that will only show 4 servers in the main index. 2 of the 3 roles just grant access to specific indexes. The 3rd role grants access to the main index and has the following restriction: (host::serverA OR host::serverB OR host::serverC OR host::serverD). The issue that I am having is that this restriction is carrying over to the other roles. How would I set this up so that only those 4 servers are looked for in main, without having the restriction carry over to the other roles?
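For reference only, a sketch of how the restricted role and the index-only roles are typically expressed in authorize.conf (role names here are placeholders, and whether this resolves the carry-over depends on how the roles relate to each other, e.g. via importRoles):
[role_main_restricted]
srchIndexesAllowed = main
srchFilter = (host::serverA OR host::serverB OR host::serverC OR host::serverD)

[role_other_indexes]
srchIndexesAllowed = indexA;indexB
If the other two roles import the restricted role, its filter may travel with them, so keeping the filtered role free-standing is one thing to verify.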
Hi All, how do I search the internal logs of a remote agent (UF) node via the Splunk portal? I am trying to troubleshoot why logs are not being ingested into Splunk from the remote agent node. I ran a simple search query from the search head console:
index="_internal" sourcetype="splunkd.log" host="test1"
but I am unable to get any results, so please let me know how to search the internal log details from the search head portal. When I log into the UF server I can see the Error | Warn | Info details in splunkd.log, but my intention is to check the same from the Splunk console. Kindly guide me on the same.
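One detail worth checking, as a sketch: in the _internal index, splunkd.log is normally indexed with sourcetype=splunkd rather than splunkd.log, so a search along these lines may already return the UF's events, provided the forwarder is sending its internal logs at all:
index=_internal host="test1" sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| table _time host component log_level _raw
If this still returns nothing, the forwarder's _internal data is probably not reaching the indexers, which is itself a useful clue.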
I'm having more strange situations with my UF ingesting many big files. OK, I managed to make the UF read the current Exchange logs reasonably quickly (it seems that there were some age limits left ridiculously high by someone, so there were many files to check). So now there are several dozen (or even hundreds of) files tracked by splunkd, but it seems to work somehow. The problem is that I also monitor another quite quickly growing file on this UF, and it's giving me a headache. Some time after the UF starts, if restarted mid-day, I get:
TailReader - Enqueuing a very large file=\\<redacted> in the batch reader, with bytes_to_read=9565503150, reading of other large files could be delayed
OK, that's understandable - the batch reader is supposed to be more effective at reading a single big file at once, why not. But the trick is - the file is not getting ingested. I don't see any new events in the index. And I checked with procexp64.exe from SysInternals and handle64.exe - the file is not open by splunkd.exe at all. So where is my file??? Other files are being monitored and their data is getting ingested.
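A diagnostic sketch, assuming the UF forwards its _internal logs and substituting your forwarder's host name for the placeholder: searching the forwarder's own splunkd events for the tailing and batch-reading components may show whether the large file was ever picked up again after the enqueue message:
index=_internal host=<uf_host> sourcetype=splunkd (component=BatchReader OR component=TailReader OR component=TailingProcessor)
| table _time component log_level _raw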
Hello All, I need a search query where I can see all the index logs as a stats by count, date, and index. I tried the below search query but it didn't help:
index=* source=*license_usage.log type="Usage" splunk_server=* earliest=-2month@d
| eval Date=strftime(_time, "%Y/%m/%d")
| eventstats sum(b) as volume by idx, Date
| eval MB=round(volume/1024/1024,5)
| timechart first(MB) AS volume by idx
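A sketch of a more conventional way to get count and volume per index per day from license_usage.log (eventstats followed by timechart first() tends to double-count, so a plain stats by date and index is used instead); adjust the rounding and time range as needed:
index=_internal source=*license_usage.log type="Usage" earliest=-2mon@d
| eval Date=strftime(_time, "%Y/%m/%d")
| stats sum(b) as bytes count by Date idx
| eval MB=round(bytes/1024/1024, 2)
| rename idx as index
| table Date index count MB
| sort Date index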
hi, We need to configure the TA-elasticsearch-data-integrator---modular-input app, and we do receive data. The problem is: we receive data, but not all of it. Here is the app configuration:
Name: ALogName
Interval: 3600
Index: MyIndex
Status: Activated
Elasticsearch instance URL: MyName
Port #: MyPort
Use SSL: 1
Verify Certs: 1
CA Certs Path: /my/ca.pem
User: MyUser
Secret / Password: MyPassword
Elasticsearch Indice: MyIndice
Elasticsearch Date field name: @timestamp
Time Preset: 30d
Custom Source Type: json
If I use the CLI with the exact same configuration, except that I use match, I receive the correct data:
curl -u "MyUser:MyPassword" -k "https://MyName:MyPort/MyIndice/_search?&scroll=1m&size=1000" -H 'Content-Type: application/json' -d'{"query": {"match": {"message": "MyMessage"}}, "sort": { "@timestamp": "desc" }}'
{"_scroll_id":"[...]","took":695,"timed_out":false,"_shards":{"total":8,"successful":8,"skipped":0,"failed":0},"hits":{"total":{"value":3,"relation":"eq"},"max_score":null,"hits":[...MyData...]
Here are the logs of the app:
2021-12-06 13:29:00,073 INFO pid=26584 tid=MainThread file=base.py:log_request_success:271 | POST https://MyName:MyPort/MyIndice/_search?scroll=2m&size=1000 [status:200 request:0.870s]
2021-12-06 13:37:12,701 WARNING pid=26584 tid=MainThread file=base.py:log_request_fail:299 | POST https://MyName:MyPort/_search/scroll [status:404 request:0.076s]
2021-12-06 13:37:12,703 INFO pid=26584 tid=MainThread file=base.py:log_request_success:271 | DELETE https://MyName:MyPort/_search/scroll [status:404 request:0.002s]
2021-12-06 13:37:12,705 ERROR pid=26584 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/elasticsearch_json.py", line 104, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/input_module_elasticsearch_json.py", line 109, in collect_events
    for doc in res:
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/helpers/actions.py", line 589, in scan
    body={"scroll_id": scroll_id, "scroll": scroll}, **scroll_kwargs
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/client/utils.py", line 168, in _wrapped
    return func(*args, params=params, headers=headers, **kwargs)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/client/__init__.py", line 1513, in scroll
    "POST", "/_search/scroll", params=params, headers=headers, body=body
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/transport.py", line 415, in perform_request
    raise e
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/transport.py", line 388, in perform_request
    timeout=timeout,
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/connection/http_urllib3.py", line 275, in perform_request
    self._raise_error(response.status, raw_data)
  File "/opt/splunk/etc/apps/TA-elasticsearch-data-integrator---modular-input/bin/ta_elasticsearch_data_integrator_modular_input/elasticsearch/connection/base.py", line 331, in _raise_error
    status_code, error_message, additional_info
elasticsearch.exceptions.NotFoundError: NotFoundError(404, 'search_phase_execution_exception', 'No search context found for id [9884105]')
Any help would be great, thanks!
We're trying to configure an HTTPS API feed that will push logs from the Zscaler Cloud service into an HTTPS API-based log collector on the SIEM, using a trial Splunk Cloud platform. Please advise. Thanks, Asif
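If the push lands on a Splunk HTTP Event Collector (HEC) endpoint, the general shape of the target URL and a quick connectivity test look roughly like the following; the stack name, token, and sourcetype are placeholders, and the exact host prefix and port differ between Splunk Cloud trials and paid stacks:
https://http-inputs-<your-stack>.splunkcloud.com:443/services/collector/event

curl -k "https://http-inputs-<your-stack>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hec connectivity test", "sourcetype": "zscalernss-web"}'
The Zscaler NSS/cloud feed configuration would then point at that URL with the same token.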
Hi, we would like to fetch application logs from a Windows server which are stored in the Windows Application event log.
Windows Application Event Log Name: MSSQL_xxx
Windows Event Log Source: XYX
I found: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/MonitorWindowseventlogdata#Specify_global_settings_for_Windows_Event_Log_inputs
But is there a possibility to filter in the input stanza like this?
[WinEventLog://Application/MSSQL_xxx]
include = XYX
renderXml=1
sourcetype=XmlWinEventLog
Thanks.
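The Application/MSSQL_xxx path form isn't how event log stanzas are usually addressed, so here is a hedged sketch of two possibilities, depending on whether MSSQL_xxx is its own event log channel or only a Source inside the Application log; the whitelist line assumes the advanced key=regex filtering syntax for Windows event log inputs:
# If MSSQL_xxx is a separate event log channel:
[WinEventLog://MSSQL_xxx]
renderXml = 1
sourcetype = XmlWinEventLog

# If filtering the Application log by Source:
[WinEventLog://Application]
whitelist = SourceName=%^XYX$%
renderXml = 1
sourcetype = XmlWinEventLog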
Hi Team, I want to monitor my Unix server's CPU usage. If the CPU usage exceeds 90%, an alert email needs to be sent. Can you please help me with this?
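A sketch of the search half of such an alert, assuming CPU metrics are already collected with the Splunk Add-on for Unix and Linux (sourcetype=cpu, which exposes a pctIdle field); saved as a scheduled alert with an email action, it fires when any host crosses 90%:
index=* sourcetype=cpu CPU="all"
| eval cpu_used = 100 - pctIdle
| stats latest(cpu_used) as cpu_used by host
| where cpu_used > 90
The index name and the 90% threshold are placeholders to adjust for your environment.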
Hi, I am looking for a way to track when a new Splunk Forwarder connects along with the version. Was hoping to find some relevant field on Deployment Server (/services/deployment/server/clients) but I could only see lastPhoneHomeTime, nothing for when it first connected to the system. Is it possible to get this information, either from Deployment Server or Forwarder's internal logs?   Thanks, ~ Abhi  
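One place this information does exist is the indexers' own metrics.log: every incoming forwarder connection is logged with group=tcpin_connections, including a version field, so the earliest such event per forwarder (bounded by _internal retention) approximates when it first connected. A sketch:
index=_internal source=*metrics.log* group=tcpin_connections
| stats min(_time) as first_seen latest(version) as version latest(fwdType) as fwdType by hostname
| eval first_seen=strftime(first_seen, "%Y-%m-%d %H:%M:%S")
| sort - first_seen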
Hi all, someone in our environment made a field extraction and used it in a dashboard. Other users want to see these fields too, so as the admin I made the permissions on that specific field extraction global. However, the other users still don't see it. When I look at the backend I see that the transforms stanza is saved in the user folder instead of the app folder. Why does this happen, and what is the solution to make the extraction available to everyone, besides just copying it to an app folder?
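One common cause, offered as a hedged pointer: a field extraction created through the UI is really two knowledge objects, the props EXTRACT/REPORT stanza and the transforms stanza, and each has its own permissions, so the transform may still be private even after the extraction was shared. If the objects do end up in the app folder, the permissions can be declared in the app's metadata/local.meta along these lines (stanza names below are placeholders for the real extraction names):
[props/<sourcetype>/REPORT-my_extraction]
export = system
access = read : [ * ], write : [ admin ]

[transforms/my_transform]
export = system
access = read : [ * ], write : [ admin ]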
We have a requirement to set up ping and nslookup for hosts in different network zones and index the data into Splunk. We are planning to use the ping and nslookup search commands that come with the Network Toolkit TA, along with the Splunk map command, as below:
| inputlookup hostlist.csv | fields host | map search="| ping dest="$host$" index=index name" max searches=50000
and do a similar one for nslookup instead of ping. This will be set up as a scheduled saved search running every 5 minutes. There can be a maximum of around 50000 hosts. Do you think we can use map at this scale? If not, what is the optimal number of rows (with the search running every 5 minutes) to use with the map command for this to work efficiently? Are there any other better options to set this up in Splunk?
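For reference, a hedged sketch of how the map form is usually written (maxsearches as one token, escaped inner quotes, and collect writing results to a summary index whose name here is a placeholder; the ping command's arguments are taken from the snippet above, not verified against the TA):
| inputlookup hostlist.csv
| fields host
| map maxsearches=50000 search="| ping dest=\"$host$\" | eval host=\"$host$\" | collect index=network_checks"
That said, map runs one search per row sequentially, so 50,000 pings every 5 minutes is unlikely to keep up; a scripted or modular input running per network zone on heavy forwarders is the more common pattern at that scale.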
I'm pulling events from remote computers using WMI as described in the Splunk docs. Everything seems to be going quite well, except... sometimes I encounter something like this in my logs:
Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Mon Dec 6 12:22:22 2021). Context: source=WMI:WinEventLog:Application|host=<redacted>|WMI:WinEventLog:Application|1
Which is quite surprising, since I thought that WMI-pulled events should have a proper timestamp created from the event timestamp on the source machine. Has anyone encountered such an issue?