All Topics



Hello - We are trying to determine how to create an alert that tells us when other users create alerts. I'm aware this is somewhat recursive thinking.

index=_internal sourcetype=scheduler user=maidman
| eval is_realtime=if(searchmatch("sid=rt* OR concurrency_category=real-time_scheduled"),"yes","no")
| table savedsearch_name, user, date_hour, date_minute

This tells me when an alert ran, but not its creation date.
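One possible approach, sketched below: instead of the scheduler logs, query the saved-search configuration itself over Splunk's REST endpoint. The field names (title, eai:acl.owner, updated) are from my recollection of the /saved/searches endpoint and should be verified on your version; note that updated reflects the last modification time, not strictly the creation time, so the _audit index may be a better source for true creation events.

```
| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1
| table title, eai:acl.owner, eai:acl.app, updated
```

Scheduling this search and comparing results between runs (e.g., with a lookup of previously seen titles) would let an alert fire when a new saved search appears.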
I am producing data like this in an alert that sends an email, which is needed. I'm attempting to control the email Subject and Message, but I need to make an adjustment. If all of the statuses are "SUCCEEDED", then I need to show that in the Subject and in the message. However, if any of the statuses are something other than "SUCCEEDED", then I need the Subject and the message to show that instead. NOTE: There will always be 5 items; that part is working as needed.

item  Status     Message
1     SUCCEEDED  Success Message
2     SUCCEEDED  Success Message
3     SUCCEEDED  Success Message
4     FAILED     Failure Message
5     SUCCEEDED  Success Message

Approach creating the above:

| eval subject= if(status="Failure","FAILED","SUCCEEDED")
| eval message= if(status="Failure","Failure Message","Success Message")
| rename affected_ci as URL, subject as Status, event_date_time as Date
| table item, status, message

What I need is:

item  Status     Message          Subject_Value  Email_Message
1     SUCCEEDED  Success Message  Failure        Failure Message
2     SUCCEEDED  Success Message  Failure        Failure Message
3     SUCCEEDED  Success Message  Failure        Failure Message
4     FAILED     Failure Message  Failure        Failure Message
5     SUCCEEDED  Success Message  Failure        Failure Message

The idea here is that I need to pass the subject and email message into every row, then use $result.Subject_Value$ and $result.Email_Message$ in the appropriate field.
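A sketch of how this could be done with eventstats, which copies an aggregate onto every row: count the rows whose status is not "SUCCEEDED", then derive the shared Subject_Value and Email_Message from that count. Field names here assume the status field shown in the question; adjust the literal strings to match your data.

```
... your base search ...
| eval message=if(status!="SUCCEEDED","Failure Message","Success Message")
| eventstats count(eval(status!="SUCCEEDED")) as fail_count
| eval Subject_Value=if(fail_count>0,"Failure","Success")
| eval Email_Message=if(fail_count>0,"Failure Message","Success Message")
| fields - fail_count
| table item, status, message, Subject_Value, Email_Message
```

Because eventstats writes the same value onto all five rows, $result.Subject_Value$ and $result.Email_Message$ (which read the first result row) should pick up the overall outcome regardless of row order.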
Dear InterMapper, thanks very much for the app. Do you have plans to update the InterMapper App for Splunk for compatibility with 7.x, 8.x, Splunk Cloud, and CIM compliance? If so, could you please share any info about plans / timeline? If you are not planning to evolve this app, have you considered publishing it on github.com to enable contributions from the community of users and developers?
I have the Windows Defender ATP Modular Inputs TA installed on a heavy forwarder. Initially some data was ingested, but after some time no new data has been coming in, and I see an error in the logs saying a duplicate event was found and hence execution is getting skipped. This has been happening for many days now. Microsoft confirmed there is new data available to ingest. @thambisetty Could you please help? Is this normal, or does it need to be fixed by Microsoft?

2020-04-30 08:42:32,766 INFO pid=18777 tid=MainThread file=base_modinput.py:log_info:293 | Exiting..End of the script
2020-04-30 08:42:32,766 DEBUG pid=18777 tid=MainThread file=base_modinput.py:log_debug:286 | Duplicate event found. skipping...
2020-04-30 08:42:32,766 DEBUG pid=18777 tid=MainThread file=binding.py:new_f:71 | Operation took 0:00:00.002918
2020-04-30 08:42:32,765 DEBUG pid=18777 tid=MainThread file=connectionpool.py:_make_request:387 | "GET /servicesNS/nobody/TA_windows-defender/storage/collections/data/TA_windows_defender_checkpointer/thre_obj_checkpoint HTTP/1.1" 200 114
2020-04-30 08:42:32,763 DEBUG pid=18777 tid=MainThread file=binding.py:get:664 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA_windows-defender/storage/collections/data/TA_windows_defender_checkpointer/thre_obj_checkpoint (body: {})
2020-04-30 08:42:32,760 DEBUG pid=18777 tid=MainThread file=connectionpool.py:_make_request:400 | https://wdatp-alertexporter-us.securitycenter.windows.com:443 "GET //api/alerts?sinceTimeUtc=2020-04-30%2000:28:14.600626 HTTP/1.1" 200 2208
Hello All, I need to know if anyone has been ingesting events into Splunk from the Versa Analytics manager using the Versa Log Collector and Exporter process. If you have, were there any issues initially that you had to overcome? How did you resolve them? And was the outcome as good as promised when working with Versa? I ask because I'm about to head down the same path and would like to work smart rather than hard. Cheers, TimW
Hi, in a line chart visualization, is it possible to convert the y-axis from decimal to scientific notation? Thanks
Still a newbie here; I need help or ideas on how to check whether the status of a server has changed or stayed the same within the hour. Here's an example of the status of a server.
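One way this is commonly approached, sketched below with streamstats: sort the events by time, carry the previous status value onto each event, and compare. The index, sourcetype, and the host/status field names are placeholders and need to match your data.

```
index=your_index sourcetype=your_sourcetype earliest=-60m
| sort 0 _time
| streamstats current=f window=1 last(status) as prev_status by host
| eval change=case(isnull(prev_status), "first seen",
                   status==prev_status, "same",
                   true(), "changed")
| table _time, host, prev_status, status, change
```

Adding something like `| where change="changed"` at the end would reduce the results to only the transitions, which is usually what an alert on this pattern wants.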
I'm using the 15-day Splunk Cloud trial and I'm doing the Fundamentals 1 free course. In lab 3 they have us create a new user, but when I go to Settings, there is no Users button there. Is it in a different spot? Do I not have access to that?
I am trying to forward logs from a Windows server to a Linux Splunk Enterprise instance using the universal forwarder. The Application.evtx file was transferred to D:\Archive_Logs\Application_Logs\Application.evtx instead of the regular folder where application logs are stored. I used inputs.conf to monitor the file with:

[monitor://d:\Archive_Logs\Application_Logs\Application.evtx]

It seems to have ingested it, but I only got one event with unreadable data. This is the same unreadable data I get when I try to use the Add Data feature in Splunk. I read the documentation at https://docs.splunk.com/Documentation/Splunk/8.0.3/Data/MonitorWindowseventlogdata and it says there are some issues with using a Linux Splunk instance to monitor Windows event logs. I'm not sure why this is not working, because we also have other servers with Windows event logs being sent to the same Linux Splunk Enterprise, but those use the regular [WinEventLog://Application] input. Why does this happen, and how can I get our logs sent to Splunk? We have a Splunk deployment with a deployment server pushing apps to the Windows servers.
Hi, we are currently sending data directly to Splunk Cloud, but with recent changes we want to send it to a heavy forwarder first, and from there on to Splunk Cloud. We followed the official document below, but can we send to a Windows heavy forwarder and then on to Splunk Cloud? Does this work for the current changes, or does the data have to reach Splunk Cloud directly? https://docs.splunk.com/Documentation/SplunkCloud/8.0.2003/Admin/WindowsGDI
Hi, I am trying to upload a file with JSON-formatted data like below, but it's not coming in properly. I tried two ways:

- When selecting the sourcetype as automatic, it creates a separate event for the timestamp field.
- When selecting the sourcetype as _json, the timestamp does not even appear in the event.

Tue 21 Apr 14:16:26 BST 2020 {"items":[{"cpu.load": "0.97","total.jvm.memory": "6039.798 MB","free.jvm.memory": "4466.046 MB","used.jvm.memory": "1573.752 MB","total.physical.system.memory": "16.656 GB","total.free.physical.system.memory": "3874.03 MB","total.used.physical.system.memory": "12.782 GB","number.of.cpus": "8"}]}
Tue 21 Apr 14:16:36 BST 2020 {"items":[{"cpu.load": "0.97","total.jvm.memory": "6039.798 MB","free.jvm.memory": "4456.382 MB","used.jvm.memory": "1583.415 MB","total.physical.system.memory": "16.656 GB","total.free.physical.system.memory": "3874.439 MB","total.used.physical.system.memory": "12.782 GB","number.of.cpus": "8"}]}
Tue 21 Apr 14:16:46 BST 2020 {"items":[{"cpu.load": "0.84","total.jvm.memory": "6039.798 MB","free.jvm.memory": "4449.94 MB","used.jvm.memory": "1589.858 MB","total.physical.system.memory": "16.656 GB","total.free.physical.system.memory": "3867.042 MB","total.used.physical.system.memory": "12.789 GB","number.of.cpus": "8"}]}

Is there a way to ingest/upload this data properly?
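A possible props.conf sketch for a custom sourcetype, assuming every event begins with a timestamp of the form "Tue 21 Apr 14:16:26 BST 2020": break on newlines that are followed by such a prefix, and parse the timestamp from the start of the event. The stanza name is a placeholder.

```
[your_json_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\w{3}\s\d{1,2}\s\w{3}\s\d{2}:)
TIME_PREFIX = ^
TIME_FORMAT = %a %d %b %H:%M:%S %Z %Y
MAX_TIMESTAMP_LOOKAHEAD = 30
KV_MODE = json
```

One caveat: because the event is not pure JSON (the timestamp prefix sits outside the braces), automatic JSON field extraction may not fire, and you may need `| spath` on the JSON portion at search time, or strip the prefix at index time instead.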
After we upgraded TA-MS-AAD from version 2.0.2 to 2.1.0, we only get 100 user IDs returned from Azure AD on each query iteration. We suspect there is something wrong in the underlying Python query that hits some kind of maximum return limit of 100. When I reverted back to version 2.0.2, the results were back to normal. Any idea why this issue occurs in version 2.1.0?
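For context, the Microsoft Graph API pages its results (often 100 per page) and returns an @odata.nextLink URL when more pages exist; a client that stops after the first page would show exactly this symptom. A minimal sketch of the paging loop, where fetch stands in for an authenticated HTTP GET that returns the parsed JSON response:

```python
def collect_all_pages(fetch, url):
    """Follow @odata.nextLink until the last page, accumulating 'value' items.

    fetch: callable taking a URL and returning the response as a dict.
    url:   the initial Graph query URL (e.g. .../v1.0/users).
    """
    items = []
    while url:
        page = fetch(url)
        items.extend(page.get("value", []))
        # Absent on the final page, which ends the loop.
        url = page.get("@odata.nextLink")
    return items
```

Whether the 2.1.0 regression is in a loop like this one is an assumption; comparing the add-on's Python between versions would confirm it.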
Hello, this is what my field extraction looks like in the GUI:

Name: source::/home/user/logs/* : EXTRACT-request_id
Type: Inline
Extraction/Transform: Request\sID:\s(?P<request_id>[0-9a-zA-Z\:\.\-\@]+)
App: search

What is the best way to configure props.conf and transforms.conf for this change? What content do I need to add to both files? Thanks
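For what it's worth, an inline (EXTRACT) extraction lives entirely in props.conf; transforms.conf is only needed for REPORT-style extractions. A sketch of both forms, reusing the regex from the question:

```
# props.conf -- inline form (no transforms.conf needed)
[source::/home/user/logs/*]
EXTRACT-request_id = Request\sID:\s(?P<request_id>[0-9a-zA-Z\:\.\-\@]+)

# props.conf + transforms.conf -- equivalent REPORT form
[source::/home/user/logs/*]
REPORT-request_id = request_id_extraction

# transforms.conf
[request_id_extraction]
REGEX = Request\sID:\s(?P<request_id>[0-9a-zA-Z\:\.\-\@]+)
```

Both belong in the app where the extraction should live (here, the search app, under local/), with permissions set as needed.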
Can anyone help me with navigation? I have created two apps. In the "test" app I have a dashboard; when I click my panel, it links to the "sample" dashboard. This is what I expected. My problem is that when I navigate from test to sample, the app name also changes from test to sample. I want the app name to always read "test", even after I navigate to sample. Is it possible to link to a dashboard of a different app while always displaying the current app name?
I wrote a Python script for a modular input in Splunk Add-on Builder. As of now I am able to fetch logs, but each time it runs, the same logs are indexed into the indexer. I want to implement the code in such a way that it only indexes data when the log data has changed. Can someone please help me?
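The usual fix is checkpointing: persist a marker of what was already indexed and skip it on the next run (Add-on Builder's generated helper also exposes checkpoint methods for this, if I recall correctly). A framework-free sketch of the idea, using a JSON file of event hashes; the state-file location and event shape are assumptions to adapt to your input:

```python
import hashlib
import json
import os


def _fingerprint(event):
    """Stable hash of an event dict, used to detect already-indexed data."""
    blob = json.dumps(event, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()


def filter_new_events(events, state_path):
    """Return only events not seen in previous runs; persist their hashes."""
    seen = set()
    if os.path.exists(state_path):
        with open(state_path) as f:
            seen = set(json.load(f))
    new = [e for e in events if _fingerprint(e) not in seen]
    seen.update(_fingerprint(e) for e in new)
    with open(state_path, "w") as f:
        json.dump(sorted(seen), f)
    return new
```

In a real modular input, indexing only `filter_new_events(fetched, path)` would stop the duplicates; if the source supports it, checkpointing a "last seen" timestamp and querying only newer records is cheaper than hashing.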
When an Alert_XYZ alert triggers and creates a new ServiceNow incident with a correlation ID like "Alert_XYZ:$result.host$", someone works on that ticket in ServiceNow and closes it. If the same alert (Alert_XYZ) triggers for the same host again the next day, does it reopen and update the closed incident, or will it open a new incident with the same correlation ID (Alert_XYZ:$result.host$)?

The scenario I have to implement: when an alert triggers, the custom alert action should create an incident in ServiceNow. Until that incident is closed in ServiceNow, the same incident should be updated each time the same alert triggers. But once the incident is closed in ServiceNow, Splunk should create a new incident with a new correlation ID.
Hi folks, we recently upgraded our controllers to 20.4, and a new feature we are seeing is a persistent message on the flow map page stating we have "disconnected backend database servers". While we can turn it off for that session, once we reconnect, the message is back. The message is confusing our users and, frankly, it is a little annoying to have to "x" it off each time we access the flow map page. Can we turn this off or disable it in some way? Thanks!

From the release notes: The Controller releases occur every six weeks. This page lists the SaaS and on-premises Controller enhancements included in the 20.4 release... Database Monitoring: The Controller provides the following visual context for the backend database on the flow map: the database icon with a green mark shows that the status of the database is healthy. If there are any health rule violations, the icon changes accordingly. If there are any disconnected backend database servers, a message is displayed to connect to the appropriate server or cluster in Database Visibility.
I forgot the pass4SymmKey for my Splunk indexer cluster. Is there any way to recover it, so I can avoid changing the pass4SymmKey on all cluster members? I need that pass4SymmKey in order to add a new indexer.

[clustering]
pass4SymmKey =

Thanks
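One avenue that may be worth checking: on a cluster member that still has the key configured, server.conf stores it encrypted (a value starting with $7$ or $1$), and recent Splunk versions ship a CLI command to decrypt such values using that instance's splunk.secret. A sketch, with the encrypted value being a placeholder to copy from your own server.conf:

```
# Run on a cluster member that still has the key, as the splunk user.
# Copy the encrypted pass4SymmKey value out of server.conf first.
$SPLUNK_HOME/bin/splunk show-decrypted --value '$7$EncryptedValueFromServerConf=='
```

Whether show-decrypted is available depends on your Splunk version; if it is not, the fallback is rotating pass4SymmKey across all cluster members during a maintenance window.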
The Splunk Dev Blog by Luke Murphy provides details on making a dashboard with tabs (and searches that run when clicked). It uses Simple XML CSS and JS extensions to create tabs in a Splunk dashboard, via the Splunk Dashboard Tabs Example code shared on GitHub. This question is more to document a different approach: using the Splunk Link List input with a CSS override and a Link List change event handler to make it appear and function like a tab in Splunk. This does so without using JS. You can have many Link Lists within the same dashboard, which means many tab sets within the same dashboard, as per your need and implementation. Hope you find this useful.
The Splunk Add-on for Windows has changed the way it reads the WindowsUpdateLog, from tailing a log file to running a PowerShell script. The changes are explained here. However, the output from the Get-WindowsUpdateLog command has no value and doesn't seem to contain the correct logs. The logs I'm getting look something like the following:

1600/12/31 19:00:00.0000000 768 3764 Unknown( 10): GUID=638e22b1-a858-3f40-8a43-af2c2ff651a4 (No Format Information found).
1600/12/31 19:00:00.0000000 768 3764 Unknown( 11): GUID=bce7cceb-de62-3b09-7f4f-c69b1344a134 (No Format Information found).
1600/12/31 19:00:00.0000000 768 3764 Unknown( 11): GUID=638e22b1-a858-3f40-8a43-af2c2ff651a4 (No Format Information found).
1600/12/31 19:00:00.0000000 768 3764 Unknown( 50): GUID=6ffec797-f4d0-3bda-288a-dbf55dc91e0b (No Format Information found).

I also found a thread on another forum where someone seems to be having the same problem, but found no fix. Has anyone encountered the same problem? Is there any workaround?